Nov 24 08:49:02 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 08:49:02 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:49:02 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:49:03 crc restorecon[4695]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 
08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 
crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 
08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 
crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc 
restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:49:03 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 08:49:04 crc kubenswrapper[4886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 08:49:04 crc kubenswrapper[4886]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 08:49:04 crc kubenswrapper[4886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 08:49:04 crc kubenswrapper[4886]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 08:49:04 crc kubenswrapper[4886]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 08:49:04 crc kubenswrapper[4886]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.592170 4886 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596225 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596246 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596258 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596262 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596266 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596270 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596277 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596282 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596285 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596290 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596294 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596298 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596304 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596311 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596316 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596322 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596327 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596333 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596338 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596343 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596348 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596354 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596358 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596366 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596370 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596374 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596378 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596382 4886 feature_gate.go:330] unrecognized feature gate: Example Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596385 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB 
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596389 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596393 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596396 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596400 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596404 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596409 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596414 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596418 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596423 4886 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596428 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596433 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596438 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596443 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596447 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596451 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596455 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596459 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596462 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596488 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596497 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596503 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596508 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596512 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596518 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596522 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596526 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596529 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596533 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596537 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596540 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596544 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596548 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596552 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596555 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596559 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596562 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596566 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596569 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596573 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596576 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596580 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.596584 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597313 4886 flags.go:64] FLAG: --address="0.0.0.0"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597326 4886 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597338 4886 flags.go:64] FLAG: --anonymous-auth="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597343 4886 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597350 4886 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597354 4886 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597360 4886 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597366 4886 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597370 4886 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597375 4886 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597380 4886 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597385 4886 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597390 4886 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597404 4886 flags.go:64] FLAG: --cgroup-root=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597408 4886 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597418 4886 flags.go:64] FLAG: --client-ca-file=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597423 4886 flags.go:64] FLAG: --cloud-config=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597427 4886 flags.go:64] FLAG: --cloud-provider=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597431 4886 flags.go:64] FLAG: --cluster-dns="[]"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597436 4886 flags.go:64] FLAG: --cluster-domain=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597441 4886 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597445 4886 flags.go:64] FLAG: --config-dir=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597449 4886 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597454 4886 flags.go:64] FLAG: --container-log-max-files="5"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597460 4886 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597464 4886 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597469 4886 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597479 4886 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597486 4886 flags.go:64] FLAG: --contention-profiling="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597490 4886 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597495 4886 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597499 4886 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597503 4886 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597510 4886 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597514 4886 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597519 4886 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597523 4886 flags.go:64] FLAG: --enable-load-reader="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597528 4886 flags.go:64] FLAG: --enable-server="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597533 4886 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597539 4886 flags.go:64] FLAG: --event-burst="100"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597544 4886 flags.go:64] FLAG: --event-qps="50"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597548 4886 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597553 4886 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597557 4886 flags.go:64] FLAG: --eviction-hard=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597562 4886 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597567 4886 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597572 4886 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597576 4886 flags.go:64] FLAG: --eviction-soft=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597580 4886 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597585 4886 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597589 4886 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597593 4886 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597598 4886 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597603 4886 flags.go:64] FLAG: --fail-swap-on="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597608 4886 flags.go:64] FLAG: --feature-gates=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597614 4886 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597620 4886 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597625 4886 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597630 4886 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597635 4886 flags.go:64] FLAG: --healthz-port="10248"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597641 4886 flags.go:64] FLAG: --help="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597645 4886 flags.go:64] FLAG: --hostname-override=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597650 4886 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597655 4886 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597660 4886 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597665 4886 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597669 4886 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597673 4886 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597678 4886 flags.go:64] FLAG: --image-service-endpoint=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597682 4886 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597687 4886 flags.go:64] FLAG: --kube-api-burst="100"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597691 4886 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597696 4886 flags.go:64] FLAG: --kube-api-qps="50"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597700 4886 flags.go:64] FLAG: --kube-reserved=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597704 4886 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597709 4886 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597714 4886 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597718 4886 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597722 4886 flags.go:64] FLAG: --lock-file=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597727 4886 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597731 4886 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597735 4886 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597742 4886 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597746 4886 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597750 4886 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597755 4886 flags.go:64] FLAG: --logging-format="text"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597759 4886 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597764 4886 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597768 4886 flags.go:64] FLAG: --manifest-url=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597772 4886 flags.go:64] FLAG: --manifest-url-header=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597778 4886 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597783 4886 flags.go:64] FLAG: --max-open-files="1000000"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597789 4886 flags.go:64] FLAG: --max-pods="110"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597793 4886 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597798 4886 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597802 4886 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597807 4886 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597811 4886 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597822 4886 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597826 4886 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597837 4886 flags.go:64] FLAG: --node-status-max-images="50"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597843 4886 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597847 4886 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597852 4886 flags.go:64] FLAG: --pod-cidr=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597856 4886 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597864 4886 flags.go:64] FLAG: --pod-manifest-path=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597868 4886 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597873 4886 flags.go:64] FLAG: --pods-per-core="0"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597877 4886 flags.go:64] FLAG: --port="10250"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597881 4886 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597885 4886 flags.go:64] FLAG: --provider-id=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597890 4886 flags.go:64] FLAG: --qos-reserved=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597895 4886 flags.go:64] FLAG: --read-only-port="10255"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597899 4886 flags.go:64] FLAG: --register-node="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597903 4886 flags.go:64] FLAG: --register-schedulable="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597907 4886 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597914 4886 flags.go:64] FLAG: --registry-burst="10"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597918 4886 flags.go:64] FLAG: --registry-qps="5"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597922 4886 flags.go:64] FLAG: --reserved-cpus=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597926 4886 flags.go:64] FLAG: --reserved-memory=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597932 4886 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597936 4886 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597941 4886 flags.go:64] FLAG: --rotate-certificates="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597945 4886 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597949 4886 flags.go:64] FLAG: --runonce="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597954 4886 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597959 4886 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597964 4886 flags.go:64] FLAG: --seccomp-default="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597968 4886 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597972 4886 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597976 4886 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597981 4886 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597986 4886 flags.go:64] FLAG: --storage-driver-password="root"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597991 4886 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597995 4886 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598000 4886 flags.go:64] FLAG: --storage-driver-user="root"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598004 4886 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598008 4886 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598013 4886 flags.go:64] FLAG: --system-cgroups=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598017 4886 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598025 4886 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598030 4886 flags.go:64] FLAG: --tls-cert-file=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598034 4886 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598039 4886 flags.go:64] FLAG: --tls-min-version=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598043 4886 flags.go:64] FLAG: --tls-private-key-file=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598048 4886 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598052 4886 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598057 4886 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598062 4886 flags.go:64] FLAG: --v="2"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598068 4886 flags.go:64] FLAG: --version="false"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598074 4886 flags.go:64] FLAG: --vmodule=""
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598079 4886 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598084 4886 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598205 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598211 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598217 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598227 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598234 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598238 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598243 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598247 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598251 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598255 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598259 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598262 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598266 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598270 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598274 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598277 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598281 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598285 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598288 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598292 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598298 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
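The long run of `flags.go:64] FLAG: --name="value"` entries above records every kubelet command-line flag at startup. A minimal sketch for turning those journal lines into a flag-to-value mapping, assuming only the `FLAG: --name="value"` shape shown in this log (the regex and the `parse_flags` helper are illustrative, not part of any kubelet tooling):

```python
import re

# Matches the 'flags.go:<line>] FLAG: --name="value"' tail of a journal entry,
# as emitted by the kubelet in the log above.
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*?)"')

def parse_flags(lines):
    """Return a dict mapping each logged kubelet flag to its string value."""
    flags = {}
    for line in lines:
        m = FLAG_RE.search(line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

sample = [
    'Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597441 4886 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"',
    'Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.597390 4886 flags.go:64] FLAG: --cgroup-driver="cgroupfs"',
]
print(parse_flags(sample))
```

Fed the full journal (e.g. via `journalctl -u kubelet`), this would recover the effective startup configuration, such as `--config="/etc/kubernetes/kubelet.conf"` and `--container-runtime-endpoint="/var/run/crio/crio.sock"` seen above.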
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598302 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598307 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598311 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598314 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598318 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598322 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598326 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598330 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598334 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598337 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598341 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598345 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598348 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598352 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598358 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598362 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598365 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598369 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598372 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598376 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598380 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598383 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598387 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598391 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598394 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598398 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598402 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598405 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598409 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598413 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598417 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598422 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598426 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598431 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598435 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598439 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598443 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598448 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598452 4886 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598456 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598460 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598464 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598468 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598472 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598475 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598479 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598485 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598490 4886 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598494 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.598497 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.598512 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.608077 4886 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.608129 4886 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608219 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608232 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608239 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608245 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608250 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608255 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608259 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608263 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608268 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608272 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608282 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608289 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608294 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608299 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608304 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608309 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608314 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608318 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608323 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608326 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608331 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608336 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608345 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608351 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608356 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608361 4886 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608366 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608371 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608376 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608381 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608385 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608389 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608393 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608397 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608401 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608405 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608409 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608412 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608416 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608420 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608424 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608428 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608432 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608435 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608439 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608443 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608447 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608507 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608511 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608515 4886 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608519 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608526 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608531 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608535 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608538 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608543 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608547 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608551 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608556 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608560 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608564 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608567 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608571 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608574 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608578 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608582 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608585 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608589 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608592 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608596 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608599 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.608607 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608718 4886 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608726 4886 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608730 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608735 4886 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608739 4886 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608742 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608746 4886 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608749 4886 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608753 4886 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608756 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608760 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608764 4886 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608768 4886 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608773 4886 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608778 4886 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608782 4886 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608786 4886 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608790 4886 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608793 4886 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608797 4886 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608801 4886 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608805 4886 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608809 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608813 4886 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608816 4886 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608820 4886 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608824 4886 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608829 4886 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608833 4886 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608837 4886 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608840 4886 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608844 4886 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608847 4886 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608851 4886 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608855 4886 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608858 4886 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608862 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608866 4886 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608869 4886 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608872 4886 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608876 4886 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608880 4886 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608884 4886 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608891 4886 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608895 4886 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608899 4886 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608903 4886 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608908 4886 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608912 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608916 4886 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608925 4886 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608933 4886 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608938 4886 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608943 4886 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608950 4886 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608955 4886 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608962 4886 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608968 4886 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608974 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608981 4886 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608988 4886 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.608995 4886 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609001 4886 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609007 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609013 4886 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609020 4886 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609025 4886 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609029 4886 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609033 4886 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609037 4886 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.609041 4886 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.609049 4886 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.610844 4886 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.615612 4886 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.615755 4886 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.618260 4886 server.go:997] "Starting client certificate rotation"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.618298 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.620106 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-07 19:44:05.841980765 +0000 UTC
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.620190 4886 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 322h55m1.221793535s for next certificate rotation
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.655627 4886 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.658717 4886 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.675724 4886 log.go:25] "Validated CRI v1 runtime API"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.716104 4886 log.go:25] "Validated CRI v1 image API"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.718674 4886 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.727238 4886 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-08-44-35-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.727294 4886 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.743314 4886 manager.go:217] Machine: {Timestamp:2025-11-24 08:49:04.740175735 +0000 UTC m=+0.626913890 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b95cd08d-0a26-454e-842e-e33553e0c6a8 BootID:bf6e1d17-5641-40b5-abdc-9697895ace84 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a3:94:35 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a3:94:35 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:22:df:f1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4f:c7:31 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:52:f9:36 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:db:9e:1b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:32:68:2c:d3:68:d2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:a0:4a:ae:32:26 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.743624 4886 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.743839 4886 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.746451 4886 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.746656 4886 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.746702 4886 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.747054 4886 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.747067 4886 container_manager_linux.go:303] "Creating device plugin manager"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.747607 4886 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.747642 4886 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.747865 4886 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.747958 4886 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.754269 4886 kubelet.go:418] "Attempting to sync node with API server"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.754327 4886 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.754364 4886 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.754383 4886 kubelet.go:324] "Adding apiserver pod source"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.754404 4886 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.761415 4886 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.761541 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.761531 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.761767 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.761794 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.762905 4886 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.766467 4886 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.768897 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.768944 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.768954 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.768962 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.768977 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.768987 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.768997 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.769017 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.769030 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.769040 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.769057 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.769066 4886 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.771272 4886 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.771953 4886 server.go:1280] "Started kubelet" Nov 24 08:49:04 crc systemd[1]: Started Kubernetes Kubelet. Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.777001 4886 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.777010 4886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.777637 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.778424 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.778472 4886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.778621 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:15:17.027672593 +0000 UTC Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.778694 4886 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 770h26m12.248982755s for next certificate rotation Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.778769 4886 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.778834 4886 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.778851 4886 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 
08:49:04.779067 4886 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.778919 4886 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.780887 4886 factory.go:55] Registering systemd factory Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.780916 4886 factory.go:221] Registration of the systemd container factory successfully Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.781183 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.781282 4886 factory.go:153] Registering CRI-O factory Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.781317 4886 factory.go:221] Registration of the crio container factory successfully Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.781295 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.781440 4886 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.781490 4886 factory.go:103] Registering Raw factory Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.781527 4886 
manager.go:1196] Started watching for new ooms in manager Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.788670 4886 manager.go:319] Starting recovery of all containers Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.788959 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.194:6443: connect: connection refused" interval="200ms" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.789813 4886 server.go:460] "Adding debug handlers to kubelet server" Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.789564 4886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187ae523841a7087 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 08:49:04.771911815 +0000 UTC m=+0.658649950,LastTimestamp:2025-11-24 08:49:04.771911815 +0000 UTC m=+0.658649950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795129 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795209 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795219 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795235 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795244 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795254 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795263 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795273 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795286 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795295 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795305 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795315 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795325 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795336 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" 
seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795346 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795361 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795375 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795386 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795397 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795407 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795417 
4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795428 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795438 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795447 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795457 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795469 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795486 4886 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795496 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795508 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795517 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795527 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795538 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795549 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795567 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795577 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795586 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795595 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795609 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795617 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" 
seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795627 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795637 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795646 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795655 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795665 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795673 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795683 4886 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795694 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795705 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795714 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795724 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795734 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795743 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795764 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795777 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795786 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795796 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795807 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795816 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795825 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795837 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795847 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795863 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795872 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795881 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" 
seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795892 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795902 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795910 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795919 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795929 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.795938 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 08:49:04 crc 
kubenswrapper[4886]: I1124 08:49:04.795947 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801028 4886 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801089 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801106 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801121 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801135 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" 
seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801164 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801178 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801193 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801206 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801223 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801235 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801246 4886 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801257 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801269 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801280 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801291 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801305 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801316 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801328 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801338 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801350 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801362 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801373 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801385 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801398 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801410 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801424 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801436 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801446 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801458 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801469 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801480 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801491 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801504 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801524 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801540 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801554 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801568 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801580 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801593 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801603 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801617 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801629 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801639 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801650 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801661 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801675 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801689 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801703 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801717 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801730 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801743 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801758 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801771 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" 
seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801788 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801800 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801812 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801825 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801835 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801846 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801857 4886 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801868 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801882 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801894 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801905 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801915 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801925 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801937 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801947 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801959 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801977 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.801989 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802000 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802011 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802024 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802034 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802046 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802056 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802065 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802076 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802086 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802097 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802110 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802121 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802132 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" 
seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802143 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802166 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802175 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802186 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802199 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802210 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: 
I1124 08:49:04.802220 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802229 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802240 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802251 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802261 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802270 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802281 4886 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802291 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802301 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802311 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802328 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802340 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802351 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802362 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802374 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802384 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802392 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802402 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802412 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802424 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802434 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802444 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802455 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802466 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802476 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 
08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802485 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802495 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802505 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802515 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802524 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802535 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802545 4886 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802555 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802564 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802574 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802585 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802594 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802605 4886 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802615 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802625 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802635 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802644 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802654 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802663 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802673 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802683 4886 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802695 4886 reconstruct.go:97] "Volume reconstruction finished" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.802703 4886 reconciler.go:26] "Reconciler: start to sync state" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.811463 4886 manager.go:324] Recovery completed Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.823503 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.826182 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.826222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.826233 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.827981 4886 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.828010 4886 cpu_manager.go:226] "Reconciling" 
reconcilePeriod="10s" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.828033 4886 state_mem.go:36] "Initialized new in-memory state store" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.845342 4886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.847770 4886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.847836 4886 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.847886 4886 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.847951 4886 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 08:49:04 crc kubenswrapper[4886]: W1124 08:49:04.850946 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.851017 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.856700 4886 policy_none.go:49] "None policy: Start" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.858126 4886 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.858198 4886 state_mem.go:35] "Initializing new 
in-memory state store" Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.879623 4886 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.909637 4886 manager.go:334] "Starting Device Plugin manager" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.909716 4886 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.909733 4886 server.go:79] "Starting device plugin registration server" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.910312 4886 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.910327 4886 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.910536 4886 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.910630 4886 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.910640 4886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.917988 4886 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.948126 4886 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 
08:49:04.948300 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.949715 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.949774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.949788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.949987 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.950338 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.950430 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951166 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951313 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951444 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951489 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951569 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.951614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952213 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952455 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952606 
4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.952637 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953448 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953452 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953589 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953781 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.953812 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.955813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.955835 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.955845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.955815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.955977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.955993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.956289 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.956363 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.957185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.957209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:04 crc kubenswrapper[4886]: I1124 08:49:04.957220 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:04 crc kubenswrapper[4886]: E1124 08:49:04.989864 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.194:6443: connect: connection refused" interval="400ms" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004288 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004374 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004413 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004438 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004464 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004488 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004510 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004532 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004637 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004702 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004732 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004761 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004788 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004812 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.004892 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.010457 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.011850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.011894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.011904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.011935 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 08:49:05 crc kubenswrapper[4886]: E1124 08:49:05.015494 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.194:6443: connect: connection refused" node="crc" Nov 24 08:49:05 crc 
kubenswrapper[4886]: I1124 08:49:05.105946 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106007 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106031 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106046 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106059 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106073 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106106 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106121 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106137 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106162 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106176 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106189 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106192 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106219 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106239 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106305 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106338 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106401 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106436 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106443 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106443 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106506 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106522 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106484 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106565 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106559 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106448 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.106602 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.216289 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.218645 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.218681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.218688 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.218713 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: E1124 08:49:05.219301 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.194:6443: connect: connection refused" node="crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.272895 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.290092 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.306638 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.318898 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.320849 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: W1124 08:49:05.344901 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6394b1dcf6237447c981507be02b7db1dd2c712149c7c8080317b1d1779e9375 WatchSource:0}: Error finding container 6394b1dcf6237447c981507be02b7db1dd2c712149c7c8080317b1d1779e9375: Status 404 returned error can't find the container with id 6394b1dcf6237447c981507be02b7db1dd2c712149c7c8080317b1d1779e9375
Nov 24 08:49:05 crc kubenswrapper[4886]: W1124 08:49:05.347311 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f18b60eeb3f520366fbfc6be856067b34b6baf09ae8d736fad17742512697415 WatchSource:0}: Error finding container f18b60eeb3f520366fbfc6be856067b34b6baf09ae8d736fad17742512697415: Status 404 returned error can't find the container with id f18b60eeb3f520366fbfc6be856067b34b6baf09ae8d736fad17742512697415
Nov 24 08:49:05 crc kubenswrapper[4886]: W1124 08:49:05.356301 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-12ce30891a0ec9b3d0e192176595ed538fc1e06b28f884a93868f1ce2dcaa25d WatchSource:0}: Error finding container 12ce30891a0ec9b3d0e192176595ed538fc1e06b28f884a93868f1ce2dcaa25d: Status 404 returned error can't find the container with id 12ce30891a0ec9b3d0e192176595ed538fc1e06b28f884a93868f1ce2dcaa25d
Nov 24 08:49:05 crc kubenswrapper[4886]: E1124 08:49:05.391729 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.194:6443: connect: connection refused" interval="800ms"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.620092 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.621274 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.621311 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.621322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.621350 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: E1124 08:49:05.621647 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.194:6443: connect: connection refused" node="crc"
Nov 24 08:49:05 crc kubenswrapper[4886]: W1124 08:49:05.752761 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:05 crc kubenswrapper[4886]: E1124 08:49:05.752900 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.779351 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.858174 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"76a260578ce2092b609fefd19e09148f9a6a2923a1bee78ac069a253eb9deaf1"}
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.859870 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f18b60eeb3f520366fbfc6be856067b34b6baf09ae8d736fad17742512697415"}
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.861657 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6394b1dcf6237447c981507be02b7db1dd2c712149c7c8080317b1d1779e9375"}
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.862933 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"12ce30891a0ec9b3d0e192176595ed538fc1e06b28f884a93868f1ce2dcaa25d"}
Nov 24 08:49:05 crc kubenswrapper[4886]: I1124 08:49:05.864100 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"024b03052d62455f41063ae159557eab85c958d7a5345cad00d35c7f00a837ae"}
Nov 24 08:49:06 crc kubenswrapper[4886]: W1124 08:49:06.088171 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:06 crc kubenswrapper[4886]: E1124 08:49:06.088644 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:06 crc kubenswrapper[4886]: E1124 08:49:06.193019 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.194:6443: connect: connection refused" interval="1.6s"
Nov 24 08:49:06 crc kubenswrapper[4886]: W1124 08:49:06.220273 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:06 crc kubenswrapper[4886]: E1124 08:49:06.220389 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:06 crc kubenswrapper[4886]: W1124 08:49:06.335349 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:06 crc kubenswrapper[4886]: E1124 08:49:06.335466 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.422044 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.424543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.424594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.424603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.424629 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 08:49:06 crc kubenswrapper[4886]: E1124 08:49:06.425235 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.194:6443: connect: connection refused" node="crc"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.779610 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.868479 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662" exitCode=0
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.868561 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662"}
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.868632 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.869437 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.869470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.869479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.871611 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1"}
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.871650 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c"}
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.871699 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d"}
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.871708 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59"}
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.871780 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.871793 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.872764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.872795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.872807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.872906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.872931 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.872946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.874312 4886 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988" exitCode=0
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.874394 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988"}
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.874466 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.875789 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.875837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.875849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.876999 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241" exitCode=0
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.877075 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241"}
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.877164 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.878211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.878234 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.878245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.879542 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ad1286679fd7aff86aba670b7335b6a43e3a6843438b0c437b43abd8cfcce795" exitCode=0
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.879568 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ad1286679fd7aff86aba670b7335b6a43e3a6843438b0c437b43abd8cfcce795"}
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.879605 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.880312 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.880336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:06 crc kubenswrapper[4886]: I1124 08:49:06.880348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:07 crc kubenswrapper[4886]: W1124 08:49:07.437115 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:07 crc kubenswrapper[4886]: E1124 08:49:07.437246 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.779394 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:07 crc kubenswrapper[4886]: E1124 08:49:07.794169 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.194:6443: connect: connection refused" interval="3.2s"
Nov 24 08:49:07 crc kubenswrapper[4886]: W1124 08:49:07.808917 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:07 crc kubenswrapper[4886]: E1124 08:49:07.809057 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.884936 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.884887 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f133f636d03a079b660cd8e5e63cffd98bdf567a60e1d11c34f0775b2f443228"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.885816 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.885850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.885859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.888632 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.888704 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.888719 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.888729 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.891677 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.891706 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.891718 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.891846 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.893163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.893192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.893205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.903590 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8" exitCode=0
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.903691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8"}
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.903755 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.903756 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.905099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.905140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.905169 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.906195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.906238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:07 crc kubenswrapper[4886]: I1124 08:49:07.906251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.026382 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.027875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.027928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.027938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.027968 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 08:49:08 crc kubenswrapper[4886]: E1124 08:49:08.028574 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.194:6443: connect: connection refused" node="crc"
Nov 24 08:49:08 crc kubenswrapper[4886]: W1124 08:49:08.577319 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.194:6443: connect: connection refused
Nov 24 08:49:08 crc kubenswrapper[4886]: E1124 08:49:08.577462 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.194:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.911942 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862"}
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.912073 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.913040 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.913081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.913092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.914661 4886 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5" exitCode=0
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.914839 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.915484 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.915804 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc"
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5"} Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.915883 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.916230 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.916758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.916797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.916810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.916904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.916928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.916936 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.917585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.917623 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:08 crc kubenswrapper[4886]: I1124 08:49:08.917636 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.921476 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8"} Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.921541 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.921610 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.921631 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.921548 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75"} Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.921674 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a"} Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.922704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.922748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.922761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.922764 
4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.922780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:09 crc kubenswrapper[4886]: I1124 08:49:09.922796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:10 crc kubenswrapper[4886]: I1124 08:49:10.929570 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a"} Nov 24 08:49:10 crc kubenswrapper[4886]: I1124 08:49:10.930093 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:10 crc kubenswrapper[4886]: I1124 08:49:10.929764 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3"} Nov 24 08:49:10 crc kubenswrapper[4886]: I1124 08:49:10.933759 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:10 crc kubenswrapper[4886]: I1124 08:49:10.934628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:10 crc kubenswrapper[4886]: I1124 08:49:10.934643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.087590 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.229654 4886 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.231363 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.231411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.231438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.231467 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.932422 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.933543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.933587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:11 crc kubenswrapper[4886]: I1124 08:49:11.933602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.491262 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.491523 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.491579 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.493130 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.493189 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.493206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.936487 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.937660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.937716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:12 crc kubenswrapper[4886]: I1124 08:49:12.937729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.407268 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.407499 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.408986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.409036 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.409050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 
08:49:13.706991 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.939669 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.940616 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.940651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:13 crc kubenswrapper[4886]: I1124 08:49:13.940664 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:14 crc kubenswrapper[4886]: I1124 08:49:14.748101 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:14 crc kubenswrapper[4886]: I1124 08:49:14.748436 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:14 crc kubenswrapper[4886]: I1124 08:49:14.749967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:14 crc kubenswrapper[4886]: I1124 08:49:14.750047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:14 crc kubenswrapper[4886]: I1124 08:49:14.750073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:14 crc kubenswrapper[4886]: E1124 08:49:14.918161 4886 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.479428 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.479617 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.481136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.481220 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.481241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.588244 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.946623 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.948376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.948435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:15 crc kubenswrapper[4886]: I1124 08:49:15.948450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:16 crc kubenswrapper[4886]: I1124 08:49:16.716127 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:16 crc kubenswrapper[4886]: I1124 08:49:16.723952 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:16 crc kubenswrapper[4886]: I1124 08:49:16.948923 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:16 crc kubenswrapper[4886]: I1124 08:49:16.949764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:16 crc kubenswrapper[4886]: I1124 08:49:16.949792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:16 crc kubenswrapper[4886]: I1124 08:49:16.949801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:16 crc kubenswrapper[4886]: I1124 08:49:16.953452 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.310607 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.311498 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.312926 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.312957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.312966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.749201 4886 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.749317 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.951958 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.952851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.952883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:17 crc kubenswrapper[4886]: I1124 08:49:17.952896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:18 crc kubenswrapper[4886]: W1124 08:49:18.685196 4886 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.685332 4886 trace.go:236] Trace[238477701]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 08:49:08.683) (total time: 10001ms): Nov 24 08:49:18 crc kubenswrapper[4886]: Trace[238477701]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:49:18.685) Nov 24 08:49:18 crc kubenswrapper[4886]: Trace[238477701]: [10.001725002s] [10.001725002s] END Nov 24 08:49:18 crc kubenswrapper[4886]: E1124 08:49:18.685369 4886 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.779554 4886 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.956107 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.958010 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862" exitCode=255 Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.958115 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862"} Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.958141 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.958392 4886 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.959474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.959507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.959506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.959583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.959599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.959545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:18 crc kubenswrapper[4886]: I1124 08:49:18.960332 4886 scope.go:117] "RemoveContainer" containerID="8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862" Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.600386 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.600475 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe 
failed with statuscode: 403" Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.608523 4886 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.608612 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.961837 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.963325 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e"} Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.963526 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.964381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.964422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:19 crc kubenswrapper[4886]: I1124 08:49:19.964438 4886 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.498128 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.498637 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.499014 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.500255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.500281 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.500293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.503529 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.533762 4886 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.974704 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.975798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:22 crc kubenswrapper[4886]: I1124 08:49:22.975839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:22 crc 
kubenswrapper[4886]: I1124 08:49:22.975849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:23 crc kubenswrapper[4886]: I1124 08:49:23.977064 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:23 crc kubenswrapper[4886]: I1124 08:49:23.978050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:23 crc kubenswrapper[4886]: I1124 08:49:23.978080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:23 crc kubenswrapper[4886]: I1124 08:49:23.978088 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.600605 4886 trace.go:236] Trace[619403954]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 08:49:11.137) (total time: 13462ms): Nov 24 08:49:24 crc kubenswrapper[4886]: Trace[619403954]: ---"Objects listed" error: 13462ms (08:49:24.600) Nov 24 08:49:24 crc kubenswrapper[4886]: Trace[619403954]: [13.462808615s] [13.462808615s] END Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.600645 4886 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.600895 4886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.601936 4886 trace.go:236] Trace[2087594043]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 08:49:13.655) (total time: 10946ms): Nov 24 08:49:24 crc kubenswrapper[4886]: 
Trace[2087594043]: ---"Objects listed" error: 10946ms (08:49:24.601) Nov 24 08:49:24 crc kubenswrapper[4886]: Trace[2087594043]: [10.946106561s] [10.946106561s] END Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.602069 4886 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.604422 4886 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.604756 4886 trace.go:236] Trace[1538592054]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 08:49:11.079) (total time: 13525ms): Nov 24 08:49:24 crc kubenswrapper[4886]: Trace[1538592054]: ---"Objects listed" error: 13525ms (08:49:24.604) Nov 24 08:49:24 crc kubenswrapper[4886]: Trace[1538592054]: [13.525213034s] [13.525213034s] END Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.604791 4886 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.605472 4886 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.766518 4886 apiserver.go:52] "Watching apiserver" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.771001 4886 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.771308 4886 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.771882 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.771964 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.772023 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.772066 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.772230 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.772279 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.772289 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.772807 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.772920 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.774663 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.774682 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.774905 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.774983 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.774915 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.774665 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.775352 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.776038 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.776414 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.780117 4886 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 
08:49:24.805727 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.805789 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.805817 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.805844 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.805876 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.805908 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.805940 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.805966 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.805990 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806016 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806045 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806071 
4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806096 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806138 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806143 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806226 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806258 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806259 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806294 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806392 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806421 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806429 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806508 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806541 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806569 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806593 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806617 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806628 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806645 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806673 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806699 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806721 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806750 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806813 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806777 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806811 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806854 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806880 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806906 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806931 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806955 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806978 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.806979 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807040 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807065 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807101 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807313 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807350 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807374 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807397 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807375 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807421 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807447 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807481 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807505 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807549 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807571 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807592 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807614 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807618 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807646 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807672 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807694 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807720 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807742 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807769 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807927 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807952 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807976 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808002 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808031 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 
08:49:24.808060 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808095 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808123 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808513 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808561 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808592 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808621 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808650 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808672 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808695 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808717 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808741 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808763 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808833 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808857 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808890 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808916 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808939 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808966 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808992 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.809019 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810394 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810812 4886 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810854 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810886 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810922 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810964 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810997 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811035 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811075 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811108 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811142 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811192 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 
08:49:24.811223 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811248 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811279 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811312 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811340 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811372 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811408 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811442 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811465 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811502 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811532 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811556 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811587 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811620 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811656 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811684 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811717 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811748 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811775 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811809 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811842 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811866 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811905 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811940 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811962 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811992 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812017 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812044 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:49:24 crc 
kubenswrapper[4886]: I1124 08:49:24.812069 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812100 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812133 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812171 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812204 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812231 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812259 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812284 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812316 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812344 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812373 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: 
I1124 08:49:24.812401 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812428 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812454 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812487 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812458 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812517 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812547 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812572 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812605 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812635 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812659 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812689 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812729 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812755 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812783 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812812 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812836 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812865 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812891 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812914 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812939 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812964 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812987 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813015 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813040 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813068 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813091 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813118 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813158 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813181 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 08:49:24 
crc kubenswrapper[4886]: I1124 08:49:24.813207 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813237 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813260 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813286 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813314 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813340 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813361 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813387 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813412 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813434 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813462 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813488 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813513 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813567 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813593 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813623 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813647 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813675 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813703 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813727 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813756 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813784 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813811 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813835 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813861 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807693 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807734 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807801 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.807957 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808034 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808088 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808246 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808256 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808298 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808369 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808498 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808530 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808680 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808840 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.808991 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810082 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810220 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810236 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810202 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810341 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810724 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810783 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.810922 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811129 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811191 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811406 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811421 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811449 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811697 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811885 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811904 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811998 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812074 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812546 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.812656 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813034 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813268 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813461 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.813716 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.814547 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.814644 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.815321 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.815364 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.815591 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.815727 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.816401 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.816406 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.816815 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.817089 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.817166 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.815761 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.817329 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.817568 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.817791 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.817997 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.818218 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.818581 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.818948 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.819023 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.819464 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.820816 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.821014 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.821084 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.821462 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.821846 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.822591 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.822677 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.822894 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:49:25.322867831 +0000 UTC m=+21.209605976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.823031 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.823251 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.823285 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.824057 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.824263 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.824608 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.824842 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.822437 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.827653 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.811556 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.827902 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.828028 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.828143 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.828369 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.828685 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.828707 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.829004 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.829456 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.829711 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.829768 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830448 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830529 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830595 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830641 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830675 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830816 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830987 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.831060 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830444 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.830808 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.831682 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.832374 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.832407 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.832577 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.832727 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.832955 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.832996 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.833422 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.833652 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.833679 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.833802 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.833856 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.834020 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.834225 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.834297 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.845789 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.846136 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.846607 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.846790 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.846945 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.846976 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.847047 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.847212 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.847277 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.847510 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.847610 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.847624 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.847925 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.852738 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.853175 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.853438 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.853661 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.853809 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.854012 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.854394 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.854732 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.854977 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.855350 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.855658 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.855915 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.856013 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.856193 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.856277 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.856416 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.856937 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.856982 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857086 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.856983 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857479 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857121 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857615 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857643 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857204 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857892 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857904 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.857936 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.858101 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.858501 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.858511 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.858576 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.858635 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.858967 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.832063 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859124 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859190 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859247 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859286 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859679 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859836 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859884 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859913 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859937 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859971 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 
08:49:24.859994 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860119 4886 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860136 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860158 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860169 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860183 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860193 4886 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath 
\"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860203 4886 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860213 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860226 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860236 4886 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860245 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860255 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860600 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860621 4886 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860635 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860656 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860668 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860682 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860696 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860705 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860729 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860739 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860751 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860762 4886 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860772 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860780 4886 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860792 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860362 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860801 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861400 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861411 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861425 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861438 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861448 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861457 4886 reconciler_common.go:293] "Volume 
detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861525 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.859284 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859575 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.859819 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860272 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860432 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.860762 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.860872 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861327 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861377 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.861940 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.869578 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.869634 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.869734 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.869844 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.870181 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:25.361597948 +0000 UTC m=+21.248336083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870202 4886 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870297 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870325 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870346 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870363 4886 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870384 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870403 4886 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870421 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870439 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870456 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870474 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870492 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870508 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870523 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870492 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" 
(OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.870554 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:25.370530278 +0000 UTC m=+21.257268413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870570 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870587 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870600 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870614 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870629 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870643 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.870656 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.871763 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.872210 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873695 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873721 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873735 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873748 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873760 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873776 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873788 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: 
I1124 08:49:24.873801 4886 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873814 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873828 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873841 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873854 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873866 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873878 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873890 4886 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873903 4886 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873915 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873927 4886 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873939 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873952 4886 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873966 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873978 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.873991 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874002 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874015 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874027 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874039 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874054 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874065 4886 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" 
DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874078 4886 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874089 4886 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874101 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874113 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874128 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874140 4886 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874169 4886 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874182 
4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874194 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874205 4886 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874245 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874258 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874271 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874283 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874296 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874308 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874320 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874333 4886 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874344 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874357 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874370 4886 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874382 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 
08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874396 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874408 4886 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874421 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874433 4886 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874445 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874456 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874467 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874478 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874489 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874501 4886 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874513 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874526 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874537 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874549 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874565 4886 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874577 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874589 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874604 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874615 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874627 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874639 4886 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874651 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874664 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874676 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874688 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874701 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874712 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874724 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874736 4886 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on 
node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874747 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874758 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874771 4886 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874782 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874797 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874808 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874821 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874833 4886 reconciler_common.go:293] 
"Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874844 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874855 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874867 4886 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874877 4886 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874891 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874903 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874918 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874932 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874943 4886 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874954 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874967 4886 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874979 4886 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.874991 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.875002 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.875014 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.875106 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.875320 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.878429 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.880315 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.882371 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.885389 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.885911 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.888030 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.888897 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.889033 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.889087 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.889104 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.889116 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.889208 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:25.389184671 +0000 UTC m=+21.275922866 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.889289 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.890428 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.891319 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.891554 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.892117 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.892295 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.892371 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.892353 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.892495 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.894452 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.894733 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.894968 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.894991 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.895043 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.895247 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.895961 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.897667 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.898608 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.899020 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.899076 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 
08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.899099 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:24 crc kubenswrapper[4886]: E1124 08:49:24.899206 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:25.399176482 +0000 UTC m=+21.285914617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.900064 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.900786 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.901727 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.902809 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.902936 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.903083 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.904252 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.906322 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.907537 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.908337 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.909759 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.910768 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.911650 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.913212 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.914017 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.914920 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.915531 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.916042 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.916767 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.919065 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.919912 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.921263 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.921876 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.922727 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.923275 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.924063 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.924937 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.925007 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.926392 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.927226 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.928665 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.929357 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.930469 4886 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.930593 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.932637 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.933930 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.934754 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.936628 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.937494 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.938801 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.939605 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.941345 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.941707 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.942180 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.943770 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.944615 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 08:49:24 crc 
kubenswrapper[4886]: I1124 08:49:24.946112 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.946627 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.947844 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.948468 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.949740 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.950303 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.951869 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.952555 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 08:49:24 crc 
kubenswrapper[4886]: I1124 08:49:24.954234 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.954590 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.954919 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.955884 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.966913 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976098 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976256 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976318 4886 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976322 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976333 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976388 4886 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976402 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976414 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976426 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976437 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: 
I1124 08:49:24.976450 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976462 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976474 4886 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976485 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976498 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976510 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976522 4886 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976534 4886 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976546 4886 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976557 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976570 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976582 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976595 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976607 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976620 4886 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976632 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976644 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976655 4886 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976668 4886 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976679 4886 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976690 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976701 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976758 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976795 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976815 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976832 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976852 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976846 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.976867 4886 reconciler_common.go:293] 
"Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.978758 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:24 crc kubenswrapper[4886]: I1124 08:49:24.989166 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.000621 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.012363 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.023049 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.086630 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.093592 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.101941 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.381010 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.381263 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:49:26.381240401 +0000 UTC m=+22.267978536 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.381460 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.381504 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.381632 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.381674 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:26.381666183 +0000 UTC m=+22.268404318 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.381759 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.381823 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:26.381809187 +0000 UTC m=+22.268547332 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.482645 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.482711 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.482894 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.482911 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.482956 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.482974 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.483055 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:26.483030603 +0000 UTC m=+22.369768798 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.482924 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.483091 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:25 crc kubenswrapper[4886]: E1124 08:49:25.483123 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:26.483115286 +0000 UTC m=+22.369853421 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.639624 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.644315 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.651764 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.675133 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.683201 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.690782 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.705882 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.723596 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.736464 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.751961 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.773266 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.790593 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.806422 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.822098 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.837734 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.851394 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.923270 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5g6ld"] Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.923824 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5g6ld" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.926479 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.927228 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.931956 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.944967 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.962786 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.976685 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.984473 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a801ceb771722e998fb540f100059097656e88c39302d518c96e7d3723d0df92"} Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.987251 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35"} Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.987310 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f"} Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.987335 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"20d28d36ebf0f9fe972e5de21a83d07b6d3e0aad451db96f09e6aab15d403278"} Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.988643 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8"} Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.988681 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0153ab95719a0245d8df5a28a5d663accf9721aa87b64b1a6874253f79abdb41"} Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.990968 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.991521 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.994463 4886 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e" exitCode=255 Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.994548 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e"} Nov 24 08:49:25 crc kubenswrapper[4886]: I1124 08:49:25.994642 4886 scope.go:117] "RemoveContainer" containerID="8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.002245 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.003613 4886 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.014842 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.026076 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.038765 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.060171 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.087759 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffhbd\" (UniqueName: \"kubernetes.io/projected/82737edd-859f-4e06-8559-47375deb3a1a-kube-api-access-ffhbd\") pod \"node-resolver-5g6ld\" (UID: \"82737edd-859f-4e06-8559-47375deb3a1a\") " pod="openshift-dns/node-resolver-5g6ld" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.087877 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/82737edd-859f-4e06-8559-47375deb3a1a-hosts-file\") pod \"node-resolver-5g6ld\" (UID: \"82737edd-859f-4e06-8559-47375deb3a1a\") " pod="openshift-dns/node-resolver-5g6ld" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.099579 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.150762 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.155998 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.156967 4886 scope.go:117] "RemoveContainer" containerID="b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 
08:49:26.157289 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.178485 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.189598 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/82737edd-859f-4e06-8559-47375deb3a1a-hosts-file\") pod \"node-resolver-5g6ld\" (UID: \"82737edd-859f-4e06-8559-47375deb3a1a\") " pod="openshift-dns/node-resolver-5g6ld" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.189679 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffhbd\" (UniqueName: \"kubernetes.io/projected/82737edd-859f-4e06-8559-47375deb3a1a-kube-api-access-ffhbd\") pod \"node-resolver-5g6ld\" (UID: \"82737edd-859f-4e06-8559-47375deb3a1a\") " pod="openshift-dns/node-resolver-5g6ld" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.190082 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/82737edd-859f-4e06-8559-47375deb3a1a-hosts-file\") pod \"node-resolver-5g6ld\" (UID: 
\"82737edd-859f-4e06-8559-47375deb3a1a\") " pod="openshift-dns/node-resolver-5g6ld" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.214118 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.221904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffhbd\" (UniqueName: \"kubernetes.io/projected/82737edd-859f-4e06-8559-47375deb3a1a-kube-api-access-ffhbd\") pod \"node-resolver-5g6ld\" (UID: \"82737edd-859f-4e06-8559-47375deb3a1a\") " pod="openshift-dns/node-resolver-5g6ld" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.236404 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.253721 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.270178 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.294101 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.327430 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kl8k6"] Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.329083 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2dk8j"] Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.329467 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-zc46q"] Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.330129 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.330187 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.330349 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.334593 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.334939 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.334719 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.335095 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.334805 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.334856 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.335308 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.336652 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.337380 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.337589 4886 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.337705 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.339047 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.358454 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.375729 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.380701 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5g6ld" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.391996 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.392076 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.392123 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.392194 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:49:28.392168911 +0000 UTC m=+24.278907056 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.392228 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.392275 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:28.392265244 +0000 UTC m=+24.279003379 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.392295 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.392336 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:28.392325265 +0000 UTC m=+24.279063400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.393341 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:18Z\\\",\\\"message\\\":\\\"W1124 08:49:08.065821 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 08:49:08.066350 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763974148 cert, and key in /tmp/serving-cert-1713431650/serving-signer.crt, /tmp/serving-cert-1713431650/serving-signer.key\\\\nI1124 08:49:08.489753 1 observer_polling.go:159] Starting file observer\\\\nW1124 08:49:08.492449 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 08:49:08.492620 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:49:08.494754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1713431650/tls.crt::/tmp/serving-cert-1713431650/tls.key\\\\\\\"\\\\nF1124 08:49:18.704666 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.408264 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.422574 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.435053 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.453397 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.470892 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.491948 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492471 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-socket-dir-parent\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492501 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-multus-certs\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hf8\" (UniqueName: 
\"kubernetes.io/projected/23cb993e-0360-4449-b604-8ddd825a6502-kube-api-access-h8hf8\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492548 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-cni-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492583 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-cni-bin\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492692 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/23cb993e-0360-4449-b604-8ddd825a6502-rootfs\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492752 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23cb993e-0360-4449-b604-8ddd825a6502-proxy-tls\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492782 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-os-release\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492810 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-kubelet\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492830 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-k8s-cni-cncf-io\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492853 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pfs\" (UniqueName: \"kubernetes.io/projected/5d515fec-60f3-4bf7-9ba4-697bb691b670-kube-api-access-p6pfs\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492872 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-cnibin\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492910 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-netns\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492970 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-conf-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.492994 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-daemon-config\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493066 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-system-cni-dir\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493099 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ng9\" (UniqueName: \"kubernetes.io/projected/b578bbaf-7246-42d9-9d2d-346bd1da2c41-kube-api-access-b6ng9\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493136 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493219 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-etc-kubernetes\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493330 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-os-release\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493359 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-cni-multus\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.493407 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.493448 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:26 crc kubenswrapper[4886]: 
E1124 08:49:26.493469 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493414 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cnibin\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493513 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cb993e-0360-4449-b604-8ddd825a6502-mcd-auth-proxy-config\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.493561 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:28.493534441 +0000 UTC m=+24.380272756 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493614 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5d515fec-60f3-4bf7-9ba4-697bb691b670-cni-binary-copy\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-system-cni-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493725 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-hostroot\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 
crc kubenswrapper[4886]: I1124 08:49:26.493757 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493782 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cni-binary-copy\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.493803 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.493955 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.493970 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.493982 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.494023 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:28.494012945 +0000 UTC m=+24.380751230 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.506500 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.532548 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.547260 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.567774 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.578275 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.594866 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-socket-dir-parent\") pod \"multus-2dk8j\" (UID: 
\"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.594919 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-multus-certs\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.594948 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hf8\" (UniqueName: \"kubernetes.io/projected/23cb993e-0360-4449-b604-8ddd825a6502-kube-api-access-h8hf8\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.594998 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-cni-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595022 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-multus-certs\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595077 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-socket-dir-parent\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 
08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595125 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-cni-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595193 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-cni-bin\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595017 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-cni-bin\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/23cb993e-0360-4449-b604-8ddd825a6502-rootfs\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23cb993e-0360-4449-b604-8ddd825a6502-proxy-tls\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595591 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-os-release\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595630 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-kubelet\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595650 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-k8s-cni-cncf-io\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595697 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-kubelet\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595560 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/23cb993e-0360-4449-b604-8ddd825a6502-rootfs\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595491 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595787 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-os-release\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595828 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6pfs\" (UniqueName: \"kubernetes.io/projected/5d515fec-60f3-4bf7-9ba4-697bb691b670-kube-api-access-p6pfs\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595852 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-cnibin\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595859 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-k8s-cni-cncf-io\") 
pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-netns\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595980 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-cnibin\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.596034 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-run-netns\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.595932 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-conf-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.596089 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-daemon-config\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.596103 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-conf-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.596260 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-system-cni-dir\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.596988 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5d515fec-60f3-4bf7-9ba4-697bb691b670-multus-daemon-config\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597025 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-system-cni-dir\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597064 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ng9\" (UniqueName: \"kubernetes.io/projected/b578bbaf-7246-42d9-9d2d-346bd1da2c41-kube-api-access-b6ng9\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597095 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-etc-kubernetes\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597333 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-os-release\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597389 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-etc-kubernetes\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597434 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-os-release\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597352 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-cni-multus\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597484 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cnibin\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597526 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-host-var-lib-cni-multus\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cnibin\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597604 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cb993e-0360-4449-b604-8ddd825a6502-mcd-auth-proxy-config\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598112 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cb993e-0360-4449-b604-8ddd825a6502-mcd-auth-proxy-config\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.597636 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/5d515fec-60f3-4bf7-9ba4-697bb691b670-cni-binary-copy\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598766 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598795 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-system-cni-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598827 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-hostroot\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598871 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cni-binary-copy\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598698 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/5d515fec-60f3-4bf7-9ba4-697bb691b670-cni-binary-copy\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598900 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598932 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-hostroot\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.598973 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5d515fec-60f3-4bf7-9ba4-697bb691b670-system-cni-dir\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.599594 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23cb993e-0360-4449-b604-8ddd825a6502-proxy-tls\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.599790 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b578bbaf-7246-42d9-9d2d-346bd1da2c41-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: 
\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.599884 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cni-binary-copy\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.599991 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b578bbaf-7246-42d9-9d2d-346bd1da2c41-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.612972 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:18Z\\\",\\\"message\\\":\\\"W1124 08:49:08.065821 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 08:49:08.066350 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763974148 cert, and key in /tmp/serving-cert-1713431650/serving-signer.crt, /tmp/serving-cert-1713431650/serving-signer.key\\\\nI1124 08:49:08.489753 1 observer_polling.go:159] Starting file observer\\\\nW1124 08:49:08.492449 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 08:49:08.492620 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:49:08.494754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1713431650/tls.crt::/tmp/serving-cert-1713431650/tls.key\\\\\\\"\\\\nF1124 08:49:18.704666 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.613761 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6pfs\" (UniqueName: \"kubernetes.io/projected/5d515fec-60f3-4bf7-9ba4-697bb691b670-kube-api-access-p6pfs\") pod \"multus-2dk8j\" (UID: \"5d515fec-60f3-4bf7-9ba4-697bb691b670\") " pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.615039 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hf8\" (UniqueName: \"kubernetes.io/projected/23cb993e-0360-4449-b604-8ddd825a6502-kube-api-access-h8hf8\") pod \"machine-config-daemon-zc46q\" (UID: \"23cb993e-0360-4449-b604-8ddd825a6502\") " pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.618966 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ng9\" (UniqueName: \"kubernetes.io/projected/b578bbaf-7246-42d9-9d2d-346bd1da2c41-kube-api-access-b6ng9\") pod \"multus-additional-cni-plugins-kl8k6\" (UID: \"b578bbaf-7246-42d9-9d2d-346bd1da2c41\") " pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.625046 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.638339 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.645452 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.651907 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.666753 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.686378 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.701240 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.718028 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.718344 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-657wc"] Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.719427 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.722698 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.723034 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.723240 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.723286 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.723489 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.724634 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.724785 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.738287 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: W1124 08:49:26.751433 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb578bbaf_7246_42d9_9d2d_346bd1da2c41.slice/crio-3678319f500bc0ecdec2d6de650fa7a26d78f0bd8828fe7f292ce44c56dcb0e0 WatchSource:0}: Error finding container 3678319f500bc0ecdec2d6de650fa7a26d78f0bd8828fe7f292ce44c56dcb0e0: Status 404 returned error can't find the container with id 3678319f500bc0ecdec2d6de650fa7a26d78f0bd8828fe7f292ce44c56dcb0e0 Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.756308 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.759709 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2dk8j" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.770130 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:49:26 crc kubenswrapper[4886]: W1124 08:49:26.777693 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d515fec_60f3_4bf7_9ba4_697bb691b670.slice/crio-9bb3e918db91beb74c808f1f8563500a1e4135aa31038514bd60e84eb052f13a WatchSource:0}: Error finding container 9bb3e918db91beb74c808f1f8563500a1e4135aa31038514bd60e84eb052f13a: Status 404 returned error can't find the container with id 9bb3e918db91beb74c808f1f8563500a1e4135aa31038514bd60e84eb052f13a Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.777928 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: W1124 08:49:26.793752 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23cb993e_0360_4449_b604_8ddd825a6502.slice/crio-e91649034af5bea8eb6d7414b96dc30597797638583086ffd8896b755ebf5c17 WatchSource:0}: Error finding container e91649034af5bea8eb6d7414b96dc30597797638583086ffd8896b755ebf5c17: Status 404 returned error can't find the container with id e91649034af5bea8eb6d7414b96dc30597797638583086ffd8896b755ebf5c17 Nov 24 
08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.794269 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801414 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-netns\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801480 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m55nz\" 
(UniqueName: \"kubernetes.io/projected/03f9078c-6b20-46d5-ae2a-2eb20e236769-kube-api-access-m55nz\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801534 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-systemd-units\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801558 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-slash\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801580 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-var-lib-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801602 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovn-node-metrics-cert\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801629 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801655 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-env-overrides\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801719 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-log-socket\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801817 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-bin\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801863 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-netd\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.801891 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-node-log\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.802016 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-etc-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.802069 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.802098 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-config\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.802128 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-script-lib\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 
08:49:26.802196 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-systemd\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.802225 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-ovn\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.802252 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-kubelet\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.802292 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-ovn-kubernetes\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.812782 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.836315 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.855100 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.855226 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.855438 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.855609 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.856086 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:26 crc kubenswrapper[4886]: E1124 08:49:26.856144 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.856140 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:18Z\\\",\\\"message\\\":\\\"W1124 08:49:08.065821 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 08:49:08.066350 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763974148 cert, and key in /tmp/serving-cert-1713431650/serving-signer.crt, 
/tmp/serving-cert-1713431650/serving-signer.key\\\\nI1124 08:49:08.489753 1 observer_polling.go:159] Starting file observer\\\\nW1124 08:49:08.492449 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 08:49:08.492620 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:49:08.494754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1713431650/tls.crt::/tmp/serving-cert-1713431650/tls.key\\\\\\\"\\\\nF1124 08:49:18.704666 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.889755 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.902918 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-bin\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.902981 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-netd\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903003 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-node-log\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903028 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-etc-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903049 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903068 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-config\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903092 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-script-lib\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903102 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-bin\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903218 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-netd\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903252 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903127 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-systemd\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903324 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-ovn\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903355 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-kubelet\") pod \"ovnkube-node-657wc\" (UID: 
\"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903398 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-ovn-kubernetes\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903214 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-systemd\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m55nz\" (UniqueName: \"kubernetes.io/projected/03f9078c-6b20-46d5-ae2a-2eb20e236769-kube-api-access-m55nz\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903675 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-node-log\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903733 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-kubelet\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" 
Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903772 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-etc-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903802 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-ovn-kubernetes\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.903817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-ovn\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-script-lib\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904436 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-netns\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904491 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-systemd-units\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-slash\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904535 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-var-lib-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904619 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-slash\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904692 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-systemd-units\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904733 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-var-lib-openvswitch\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904878 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.904551 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovn-node-metrics-cert\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.905166 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-env-overrides\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.905188 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-config\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.905192 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.905273 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-log-socket\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.905350 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-log-socket\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.905225 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.905602 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-env-overrides\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.906217 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-netns\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.916988 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovn-node-metrics-cert\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.923323 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.925432 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m55nz\" (UniqueName: \"kubernetes.io/projected/03f9078c-6b20-46d5-ae2a-2eb20e236769-kube-api-access-m55nz\") pod \"ovnkube-node-657wc\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.939258 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.962124 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.984016 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:26Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.999728 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5g6ld" event={"ID":"82737edd-859f-4e06-8559-47375deb3a1a","Type":"ContainerStarted","Data":"f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7"} Nov 24 08:49:26 crc kubenswrapper[4886]: I1124 08:49:26.999794 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5g6ld" event={"ID":"82737edd-859f-4e06-8559-47375deb3a1a","Type":"ContainerStarted","Data":"3a51ef8fed3792d53385df1c5d33ddafe907c3b553ca147c60a5263189140979"} Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.003684 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34"} Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.003733 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"e91649034af5bea8eb6d7414b96dc30597797638583086ffd8896b755ebf5c17"} Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.006163 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dk8j" event={"ID":"5d515fec-60f3-4bf7-9ba4-697bb691b670","Type":"ContainerStarted","Data":"ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085"} Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.006244 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dk8j" event={"ID":"5d515fec-60f3-4bf7-9ba4-697bb691b670","Type":"ContainerStarted","Data":"9bb3e918db91beb74c808f1f8563500a1e4135aa31038514bd60e84eb052f13a"} Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.007201 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerStarted","Data":"3678319f500bc0ecdec2d6de650fa7a26d78f0bd8828fe7f292ce44c56dcb0e0"} Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.009635 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.017511 4886 scope.go:117] "RemoveContainer" containerID="b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e" Nov 24 08:49:27 crc kubenswrapper[4886]: E1124 08:49:27.017715 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.032509 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.036365 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.051232 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: W1124 08:49:27.061577 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03f9078c_6b20_46d5_ae2a_2eb20e236769.slice/crio-b0b380b152ef8a74d5787cfdeca82f7ca7ac64d220106858371ba4489ff5e2da WatchSource:0}: Error finding container b0b380b152ef8a74d5787cfdeca82f7ca7ac64d220106858371ba4489ff5e2da: Status 404 returned error can't find the container with id b0b380b152ef8a74d5787cfdeca82f7ca7ac64d220106858371ba4489ff5e2da Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.079761 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.103630 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.121411 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.140045 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.156700 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.188032 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd396a94b39804c5730c17f380b8a20d993b60e18e213ea8ac8e69eb8fa5862\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:18Z\\\",\\\"message\\\":\\\"W1124 08:49:08.065821 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 08:49:08.066350 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763974148 cert, and key in /tmp/serving-cert-1713431650/serving-signer.crt, /tmp/serving-cert-1713431650/serving-signer.key\\\\nI1124 08:49:08.489753 1 observer_polling.go:159] Starting file observer\\\\nW1124 08:49:08.492449 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 08:49:08.492620 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:49:08.494754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1713431650/tls.crt::/tmp/serving-cert-1713431650/tls.key\\\\\\\"\\\\nF1124 08:49:18.704666 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.217033 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.267243 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.292383 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.340804 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.343457 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.363913 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.381916 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.405177 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.432921 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.475140 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.514919 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.561640 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.601596 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.641755 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"dat
a-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.674519 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC 
(now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.712998 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.752943 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.791074 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.832053 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.872823 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.874752 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-82v6c"] Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.875218 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.903980 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.914971 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-host\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.915055 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8qmt\" (UniqueName: \"kubernetes.io/projected/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-kube-api-access-j8qmt\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.915126 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-serviceca\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.921808 4886 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.943209 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.961955 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 08:49:27 crc kubenswrapper[4886]: I1124 08:49:27.995008 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.016065 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8qmt\" (UniqueName: \"kubernetes.io/projected/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-kube-api-access-j8qmt\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.016198 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-serviceca\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.016255 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-host\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.016310 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-host\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.017522 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-serviceca\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.022509 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76" exitCode=0 Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.022592 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.022664 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"b0b380b152ef8a74d5787cfdeca82f7ca7ac64d220106858371ba4489ff5e2da"} Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.025449 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd"} Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.027443 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="b578bbaf-7246-42d9-9d2d-346bd1da2c41" containerID="7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703" exitCode=0 Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.027538 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerDied","Data":"7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703"} Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.040502 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.059404 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8qmt\" (UniqueName: \"kubernetes.io/projected/b5d34f7b-b75b-4572-87ca-a01c66ba67b3-kube-api-access-j8qmt\") pod \"node-ca-82v6c\" (UID: \"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\") " pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.069911 4886 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.119309 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.155690 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.196183 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-82v6c" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.202913 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.234094 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.273817 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.341960 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.372277 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.408681 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.423266 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.423385 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.423478 4886 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:49:32.423440604 +0000 UTC m=+28.310178889 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.423494 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.423561 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:32.423553748 +0000 UTC m=+28.310291883 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.423734 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.423892 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.423962 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:32.423943909 +0000 UTC m=+28.310682044 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.438653 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 
08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.475926 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 
08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.512995 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.525002 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.525069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.525241 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.525261 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.525275 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.525332 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.525387 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.525411 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.525342 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:32.525324689 +0000 UTC m=+28.412062824 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.525603 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:32.525542556 +0000 UTC m=+28.412280821 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.555192 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.592272 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.632603 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.673821 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.848572 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.848646 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.849106 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.849209 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:28 crc kubenswrapper[4886]: I1124 08:49:28.848797 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:28 crc kubenswrapper[4886]: E1124 08:49:28.849453 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.034625 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e"} Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.037432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerStarted","Data":"ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168"} Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.040191 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.040248 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.040261 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.041608 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-82v6c" 
event={"ID":"b5d34f7b-b75b-4572-87ca-a01c66ba67b3","Type":"ContainerStarted","Data":"a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e"} Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.041668 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-82v6c" event={"ID":"b5d34f7b-b75b-4572-87ca-a01c66ba67b3","Type":"ContainerStarted","Data":"d88580f1bf050169378685581a44e84457c58848b416d54d5fc01df8d3a07eb9"} Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.056109 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cl
uster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.070947 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.085241 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.101354 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.116873 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.137517 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.163051 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.190418 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.210049 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC 
(now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.226394 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.243699 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.258682 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.273529 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.293420 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.306331 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.321568 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.350328 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.392243 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.433346 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.471460 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.516639 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.560248 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.599488 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.633393 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC 
(now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.671522 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.711108 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.750766 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.792939 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.832621 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:29 crc kubenswrapper[4886]: I1124 08:49:29.872562 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.050764 4886 generic.go:334] "Generic (PLEG): container finished" podID="b578bbaf-7246-42d9-9d2d-346bd1da2c41" 
containerID="ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168" exitCode=0 Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.050875 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerDied","Data":"ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168"} Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.056238 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.056307 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.056322 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.067548 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.081696 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.097219 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.116316 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.138414 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.155541 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC 
(now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.171107 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.192517 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.232056 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.271401 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.311486 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.352015 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.393592 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.434833 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.470908 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:30Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.849254 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.849351 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:30 crc kubenswrapper[4886]: I1124 08:49:30.849358 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:30 crc kubenswrapper[4886]: E1124 08:49:30.849440 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:30 crc kubenswrapper[4886]: E1124 08:49:30.849545 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:30 crc kubenswrapper[4886]: E1124 08:49:30.849661 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.006023 4886 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.008425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.008478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.008497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.008672 4886 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.017923 4886 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.018295 4886 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.019721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.019857 4886 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.019932 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.020020 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.020093 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: E1124 08:49:31.035470 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.041467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.041532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.041546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.041567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.041580 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: E1124 08:49:31.057669 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.064493 4886 generic.go:334] "Generic (PLEG): container finished" podID="b578bbaf-7246-42d9-9d2d-346bd1da2c41" containerID="d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606" exitCode=0 Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.064553 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerDied","Data":"d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.066695 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.066731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.066741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.066757 4886 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.066770 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.084884 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T
08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: E1124 08:49:31.085043 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.100746 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.101303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.101317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.101339 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.101351 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.114793 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: E1124 08:49:31.118532 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.123367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.123419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.123429 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.123444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.123456 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.132046 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: E1124 08:49:31.137575 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: E1124 08:49:31.137725 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.139940 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.139989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.140003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.140021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.140035 4886 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.152809 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 
08:49:31.179797 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.203567 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.221006 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC 
(now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.236683 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.246171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc 
kubenswrapper[4886]: I1124 08:49:31.246208 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.246219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.246240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.246253 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.253647 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.267385 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.283496 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.296652 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.309358 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.326221 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.341845 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.348903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.348966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.348977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.348996 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.349011 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.451528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.451568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.451576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.451593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.451605 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.554176 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.554232 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.554244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.554262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.554276 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.656750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.656800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.656812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.656831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.656842 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.760193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.760264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.760278 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.760305 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.760320 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.863436 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.863479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.863518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.863538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.863549 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.966265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.966317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.966329 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.966349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:31 crc kubenswrapper[4886]: I1124 08:49:31.966367 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:31Z","lastTransitionTime":"2025-11-24T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.068290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.068338 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.068352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.068372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.068386 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.072780 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.075100 4886 generic.go:334] "Generic (PLEG): container finished" podID="b578bbaf-7246-42d9-9d2d-346bd1da2c41" containerID="ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c" exitCode=0 Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.075144 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerDied","Data":"ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.090236 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.104449 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.119751 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.137906 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.155572 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.168045 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.171809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.171872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.171886 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.171908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.171922 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.183107 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7
f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.195795 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.211613 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.235586 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.259038 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.277267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.277335 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.277348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.277371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.277387 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.280573 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.294684 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.308704 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.321122 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.380284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.380325 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.380336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.380362 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.380375 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.469522 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.469682 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.469789 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:49:40.46975135 +0000 UTC m=+36.356489485 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.469829 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.470053 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:40.470033688 +0000 UTC m=+36.356771823 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.470047 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.470125 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.470194 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:40.470185592 +0000 UTC m=+36.356923767 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.482632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.482668 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.482678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.482695 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.482705 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.571040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.571116 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.571320 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.571341 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.571354 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.571408 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:40.571391138 +0000 UTC m=+36.458129273 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.571320 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.571444 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.571457 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.571492 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:40.57148207 +0000 UTC m=+36.458220205 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.585188 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.585240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.585254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.585275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.585290 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.687644 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.687697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.687710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.687733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.687747 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.792688 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.792747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.792763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.792799 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.792818 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.848338 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.848432 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.848526 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.848619 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.848802 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:32 crc kubenswrapper[4886]: E1124 08:49:32.848886 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.895699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.895751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.895764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.895784 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.895796 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.999416 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.999472 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.999483 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.999504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:32 crc kubenswrapper[4886]: I1124 08:49:32.999518 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:32Z","lastTransitionTime":"2025-11-24T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.083396 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerStarted","Data":"e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.102195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.102433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.102516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.102617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.102692 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.103018 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.119173 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.135695 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.151599 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.168603 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.183124 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.199619 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.205578 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.205668 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.205685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.205706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.205723 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.218200 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.241402 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.264754 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.281242 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC 
(now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.295594 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.308772 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.308829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.308838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.308854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.308865 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.309280 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.321380 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.337830 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6n
g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:33Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.412131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.412237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.412252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.412273 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.412289 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.515453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.515509 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.515520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.515539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.515555 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.618457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.618548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.618561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.618579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.618596 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.721528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.721584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.721594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.721610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.721622 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.824788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.824834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.824846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.824867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.824880 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.931244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.931309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.931322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.931344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:33 crc kubenswrapper[4886]: I1124 08:49:33.931361 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:33Z","lastTransitionTime":"2025-11-24T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.034839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.035254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.035267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.035287 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.035301 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.138544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.138588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.138597 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.138617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.138627 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.242200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.242703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.242802 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.242922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.242995 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.346539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.346588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.346598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.346621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.346631 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.407041 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.407873 4886 scope.go:117] "RemoveContainer" containerID="b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e" Nov 24 08:49:34 crc kubenswrapper[4886]: E1124 08:49:34.408052 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.449949 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.450322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.450405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.450496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.450576 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.553300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.553611 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.553700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.553799 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.553881 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.656796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.656850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.656861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.656883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.656896 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.759914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.759967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.759980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.760001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.760013 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.848873 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.848931 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.849012 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:34 crc kubenswrapper[4886]: E1124 08:49:34.849065 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:34 crc kubenswrapper[4886]: E1124 08:49:34.849238 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:34 crc kubenswrapper[4886]: E1124 08:49:34.849428 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.863016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.863064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.863081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.863099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.863110 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.863401 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.882607 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb62
3f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.902993 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.932418 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.947936 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.964121 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.965875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.965939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.965952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.965970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.965982 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:34Z","lastTransitionTime":"2025-11-24T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.981685 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:34 crc kubenswrapper[4886]: I1124 08:49:34.996387 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.012407 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.028372 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.045006 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.060529 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.069656 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.069747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.069764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.069788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.069807 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.078395 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.094887 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.096227 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.096504 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.100248 4886 generic.go:334] "Generic (PLEG): container finished" podID="b578bbaf-7246-42d9-9d2d-346bd1da2c41" containerID="e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd" exitCode=0 Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.100277 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerDied","Data":"e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd"} Nov 
24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.111917 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.124758 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.131863 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.143255 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb62
3f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.165506 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.172999 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.173059 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.173076 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.173098 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.173116 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.187371 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.201494 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.215024 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.229378 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.242533 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.255770 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.271661 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.276812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc 
kubenswrapper[4886]: I1124 08:49:35.276857 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.276869 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.276886 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.276898 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.284999 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.298631 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.311807 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.325613 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.339953 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.357319 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.379272 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.379800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.379844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.379856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.379875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.379887 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.402103 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.418544 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.432835 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.449342 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.465012 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.479754 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.483467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.483508 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.483521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.483547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.483558 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.500311 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.512991 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.527987 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.543085 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.555215 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.569124 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.582928 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:35Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.586342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.586408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.586420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.586441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.586454 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.690484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.690556 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.690570 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.690627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.690644 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.797117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.797175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.797185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.797200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.797211 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.899952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.900012 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.900024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.900043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:35 crc kubenswrapper[4886]: I1124 08:49:35.900055 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:35Z","lastTransitionTime":"2025-11-24T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.003372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.003414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.003423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.003446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.003459 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.108762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.109099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.109112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.109133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.109172 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.112254 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerStarted","Data":"9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.112384 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.112809 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.136568 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.147948 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.161441 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.179516 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.194507 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.209819 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.211696 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.211834 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.211901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.212144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.212275 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.227697 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.245987 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.258201 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.273521 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.291894 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.313629 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.315492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.315615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 
08:49:36.315801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.315894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.315986 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.328873 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.341894 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.355089 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.367015 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:36Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.420294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.420342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.420352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.420371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.420384 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.523343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.523642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.523730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.523817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.523882 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.627319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.627371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.627385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.627404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.627417 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.730008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.730365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.730461 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.730562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.730649 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.833093 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.833146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.833200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.833224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.833240 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.848891 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.848975 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:36 crc kubenswrapper[4886]: E1124 08:49:36.849049 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.848975 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:36 crc kubenswrapper[4886]: E1124 08:49:36.849145 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:36 crc kubenswrapper[4886]: E1124 08:49:36.849210 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.936550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.936589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.936602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.936621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:36 crc kubenswrapper[4886]: I1124 08:49:36.936634 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:36Z","lastTransitionTime":"2025-11-24T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.039655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.039711 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.039725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.039745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.039761 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.118354 4886 generic.go:334] "Generic (PLEG): container finished" podID="b578bbaf-7246-42d9-9d2d-346bd1da2c41" containerID="9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9" exitCode=0 Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.118510 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.118945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerDied","Data":"9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.134182 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e
01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.143456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.143498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.143510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.143531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.143547 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.153240 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.166874 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.184206 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.200462 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.214568 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.226899 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.244464 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.247129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.247199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.247217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.247238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.247252 4886 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.266812 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.289850 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.305508 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC 
(now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.318445 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.329903 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.340649 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.350714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.350767 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.350782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc 
kubenswrapper[4886]: I1124 08:49:37.350803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.350817 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.354650 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:37Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.460266 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.460361 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.460376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.460399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.460413 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.563079 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.563134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.563178 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.563201 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.563213 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.666326 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.666386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.666398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.666417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.666436 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.769969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.770019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.770031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.770050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.770062 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.873271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.873350 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.873373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.873406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.873431 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.976980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.977058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.977069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.977090 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:37 crc kubenswrapper[4886]: I1124 08:49:37.977117 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:37Z","lastTransitionTime":"2025-11-24T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.079929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.080006 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.080042 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.080074 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.080096 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.122607 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.182759 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.182801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.182812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.182828 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.182838 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.285739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.285803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.285819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.285846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.285863 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.334788 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j"] Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.335377 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.337358 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.338242 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.354144 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.368815 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.381964 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.388425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.388465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.388496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.388519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.388532 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.398172 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7
f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.412628 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.425898 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.437871 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6279457-41d0-46e2-9a21-1d3c74311083-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.438102 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6279457-41d0-46e2-9a21-1d3c74311083-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.438220 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvm6j\" (UniqueName: \"kubernetes.io/projected/e6279457-41d0-46e2-9a21-1d3c74311083-kube-api-access-mvm6j\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.438257 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6279457-41d0-46e2-9a21-1d3c74311083-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.444287 4886 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.458093 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.472080 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.491218 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.491273 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.491282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.491298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.491308 4886 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.491838 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.514474 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.538973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6279457-41d0-46e2-9a21-1d3c74311083-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.539322 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e6279457-41d0-46e2-9a21-1d3c74311083-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.539467 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvm6j\" (UniqueName: \"kubernetes.io/projected/e6279457-41d0-46e2-9a21-1d3c74311083-kube-api-access-mvm6j\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.539583 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6279457-41d0-46e2-9a21-1d3c74311083-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.540104 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6279457-41d0-46e2-9a21-1d3c74311083-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.540831 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6279457-41d0-46e2-9a21-1d3c74311083-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 
08:49:38.545262 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6279457-41d0-46e2-9a21-1d3c74311083-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.555362 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.565312 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvm6j\" (UniqueName: \"kubernetes.io/projected/e6279457-41d0-46e2-9a21-1d3c74311083-kube-api-access-mvm6j\") pod \"ovnkube-control-plane-749d76644c-vcq8j\" (UID: \"e6279457-41d0-46e2-9a21-1d3c74311083\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.573100 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.591898 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08
:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.593776 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.593810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.593822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.593841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.593852 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.606784 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.619847 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:38Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.647060 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" Nov 24 08:49:38 crc kubenswrapper[4886]: W1124 08:49:38.664838 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6279457_41d0_46e2_9a21_1d3c74311083.slice/crio-b64fa8b597b3cce34e5734e73f05c3276107e579c1191cd992cee109ef480ac3 WatchSource:0}: Error finding container b64fa8b597b3cce34e5734e73f05c3276107e579c1191cd992cee109ef480ac3: Status 404 returned error can't find the container with id b64fa8b597b3cce34e5734e73f05c3276107e579c1191cd992cee109ef480ac3 Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.697859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.697905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.697914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.697931 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.697940 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.801634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.801688 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.801703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.801736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.801749 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.851355 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:38 crc kubenswrapper[4886]: E1124 08:49:38.851508 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.851939 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:38 crc kubenswrapper[4886]: E1124 08:49:38.851991 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.852033 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:38 crc kubenswrapper[4886]: E1124 08:49:38.852077 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.904768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.904821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.904833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.904856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:38 crc kubenswrapper[4886]: I1124 08:49:38.904871 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:38Z","lastTransitionTime":"2025-11-24T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.007344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.007394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.007405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.007425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.007448 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.109919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.109960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.109972 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.109992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.110002 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.127061 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" event={"ID":"e6279457-41d0-46e2-9a21-1d3c74311083","Type":"ContainerStarted","Data":"b64fa8b597b3cce34e5734e73f05c3276107e579c1191cd992cee109ef480ac3"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.131383 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" event={"ID":"b578bbaf-7246-42d9-9d2d-346bd1da2c41","Type":"ContainerStarted","Data":"9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.146740 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c073
72b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.161056 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.172363 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.197077 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.213381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.213433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.213447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.213469 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.213480 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.220071 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.245014 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.260268 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC 
(now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.272264 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.285074 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.298820 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.316063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.316102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.316111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.316131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.316142 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.320260 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.334996 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.353697 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.368420 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.381380 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.394772 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.418849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.419056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.419164 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.419255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.419340 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.522318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.522371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.522390 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.522413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.522429 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.625353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.625400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.625413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.625432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.625443 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.729068 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.729381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.729393 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.729412 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.729424 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.832619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.832665 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.832674 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.832692 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.832704 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.855993 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-fkfxv"] Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.856687 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:39 crc kubenswrapper[4886]: E1124 08:49:39.856768 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.871841 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.884544 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.900350 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.915017 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.930101 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.935391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.935433 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.935443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.935461 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.935472 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:39Z","lastTransitionTime":"2025-11-24T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.941196 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.954366 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.954644 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ckc\" (UniqueName: \"kubernetes.io/projected/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-kube-api-access-v9ckc\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.954483 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.973321 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:39 crc kubenswrapper[4886]: I1124 08:49:39.990205 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc 
kubenswrapper[4886]: I1124 08:49:40.007085 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.020765 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.038293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.038342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.038356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.038279 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a6
01938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.038375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.038552 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.056051 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.056122 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ckc\" (UniqueName: \"kubernetes.io/projected/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-kube-api-access-v9ckc\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.056440 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:40 crc 
kubenswrapper[4886]: E1124 08:49:40.056574 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs podName:7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:40.556547161 +0000 UTC m=+36.443285346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs") pod "network-metrics-daemon-fkfxv" (UID: "7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.061769 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.073904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9ckc\" (UniqueName: \"kubernetes.io/projected/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-kube-api-access-v9ckc\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.082649 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.097432 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.111028 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.122886 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.141557 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.141596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.141612 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.141632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.141647 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.143755 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/0.log" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.146803 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328" exitCode=1 Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.146886 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.148006 4886 scope.go:117] "RemoveContainer" containerID="d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.148466 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" event={"ID":"e6279457-41d0-46e2-9a21-1d3c74311083","Type":"ContainerStarted","Data":"db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.148520 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" event={"ID":"e6279457-41d0-46e2-9a21-1d3c74311083","Type":"ContainerStarted","Data":"35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.160248 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.181031 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.197883 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.215006 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc 
kubenswrapper[4886]: I1124 08:49:40.228973 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.243891 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.244065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.244106 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.244119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.244140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.244172 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.255628 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.268842 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.279324 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.295741 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.320227 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08
:49:39Z\\\",\\\"message\\\":\\\"ere:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:49:39.631408 6152 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:49:39.631466 6152 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1124 08:49:39.631487 6152 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.215833ms\\\\nI1124 08:49:39.631504 6152 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1124 08:49:39.629510 6152 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff
5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.340425 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.353773 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.353827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.353839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.353862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.353877 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.355925 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.369859 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.381566 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.395824 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.410100 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.431109 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"ere:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:49:39.631408 6152 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:49:39.631466 6152 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1124 08:49:39.631487 6152 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.215833ms\\\\nI1124 08:49:39.631504 6152 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1124 08:49:39.629510 6152 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff
5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.452299 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.456592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.456636 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.456649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.456671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.456683 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.469709 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.485677 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.526820 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.550085 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.559690 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.559740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.559751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.559771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.559783 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.560889 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.561088 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:49:56.561055423 +0000 UTC m=+52.447793618 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.561176 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.561240 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: 
\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.561266 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.561285 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.561350 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:56.561323541 +0000 UTC m=+52.448061746 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.561399 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.561457 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs podName:7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7 nodeName:}" failed. 
No retries permitted until 2025-11-24 08:49:41.561441834 +0000 UTC m=+37.448179969 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs") pod "network-metrics-daemon-fkfxv" (UID: "7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.561398 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.561491 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:56.561485276 +0000 UTC m=+52.448223401 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.569334 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4
a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\
"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.587756 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.605747 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.618449 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.634738 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.649321 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.662208 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.662258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.662275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.662296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.662309 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.662378 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.662403 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.662416 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.662250 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.662465 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:56.662447574 +0000 UTC m=+52.549185709 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.662603 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.662882 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.662925 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.662943 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.663012 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-24 08:49:56.66299496 +0000 UTC m=+52.549733095 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.664879 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.678201 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.691602 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.708573 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.721824 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:40 crc 
kubenswrapper[4886]: I1124 08:49:40.765344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.765403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.765413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.765434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.765446 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.850931 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.850987 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.851058 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.851177 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.851277 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:40 crc kubenswrapper[4886]: E1124 08:49:40.851397 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.867127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.867196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.867209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.867228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.867244 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.902685 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.969778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.969813 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.969822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.969836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:40 crc kubenswrapper[4886]: I1124 08:49:40.969848 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:40Z","lastTransitionTime":"2025-11-24T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.072786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.072845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.072858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.072882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.072899 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.153984 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/1.log" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.154829 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/0.log" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.157915 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0" exitCode=1 Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.158011 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.158092 4886 scope.go:117] "RemoveContainer" containerID="d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.159826 4886 scope.go:117] "RemoveContainer" containerID="4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0" Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.160098 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.176000 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.176068 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.176052 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.176092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.176292 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.176320 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.187751 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.203279 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.220709 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.233735 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.245065 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.258031 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.263872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.263929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.263945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.263963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.263975 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.272222 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.276793 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.280211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.280252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.280261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.280278 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.280288 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.286724 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc 
kubenswrapper[4886]: E1124 08:49:41.293717 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.296872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.296912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.296924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.296948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.296961 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.300105 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.317805 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.322341 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.322386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.322394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.322410 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.322418 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.322835 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.333430 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.337185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.337239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.337249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.337265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.337277 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.341779 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d74daaf9d67d4afdcf587b8a51260a178f2b6804c138da8f4eb1a229bdfa9328\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"ere:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:49:39.631408 6152 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.93:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d71b38eb-32af-4c0f-9490-7c317c111e3a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:49:39.631466 6152 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1124 08:49:39.631487 6152 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.215833ms\\\\nI1124 08:49:39.631504 6152 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-controllers for network=default\\\\nF1124 08:49:39.629510 6152 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.349530 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.349701 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.352217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.352259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.352273 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.352294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.352307 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.362627 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.376130 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.388862 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.402533 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.414034 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:41Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.456249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.456307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.456318 4886 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.456337 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.456349 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.559706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.560320 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.560332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.560352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.560364 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.572522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.572745 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.572895 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs podName:7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:43.572869378 +0000 UTC m=+39.459607503 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs") pod "network-metrics-daemon-fkfxv" (UID: "7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.664084 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.664126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.664135 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.664174 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.664187 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.767371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.767422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.767432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.767450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.767460 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.848073 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:41 crc kubenswrapper[4886]: E1124 08:49:41.848288 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.870400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.870449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.870466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.870489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.870509 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.974273 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.974355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.974378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.974411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:41 crc kubenswrapper[4886]: I1124 08:49:41.974435 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:41Z","lastTransitionTime":"2025-11-24T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.077352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.077388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.077398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.077411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.077420 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.163753 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/1.log" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.169377 4886 scope.go:117] "RemoveContainer" containerID="4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0" Nov 24 08:49:42 crc kubenswrapper[4886]: E1124 08:49:42.169643 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.179424 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.179501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.179519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.179544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.179560 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.185094 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc 
kubenswrapper[4886]: I1124 08:49:42.201110 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.216881 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.233003 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.247760 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.262687 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.281117 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.284063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.284096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.284111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.284134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.284173 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.322945 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.348212 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.366240 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5
ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.383520 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.386801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.386964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.387082 4886 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.387222 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.387324 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.398201 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16
bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\
\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.412782 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\
\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.427981 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.440016 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.454725 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.469622 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.489714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.489754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.489763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 
08:49:42.489780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.489790 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.592733 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.593037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.593197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.593342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.593526 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.697356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.697406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.697419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.697442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.697454 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.800459 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.800504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.800516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.800534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.800545 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.848988 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.849034 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.849079 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:49:42 crc kubenswrapper[4886]: E1124 08:49:42.849725 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:49:42 crc kubenswrapper[4886]: E1124 08:49:42.849562 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:49:42 crc kubenswrapper[4886]: E1124 08:49:42.849405 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.902856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.902894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.902905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.902923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:42 crc kubenswrapper[4886]: I1124 08:49:42.902935 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:42Z","lastTransitionTime":"2025-11-24T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.005512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.005581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.005595 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.005614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.005627 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.113500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.113554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.113566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.113584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.113595 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.216100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.216423 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.216559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.216665 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.216758 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.319518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.319562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.319573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.319588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.319598 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.423143 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.423437 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.423644 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.423854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.424033 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.526998 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.527261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.527325 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.527391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.527446 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.597342 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:43 crc kubenswrapper[4886]: E1124 08:49:43.597889 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:43 crc kubenswrapper[4886]: E1124 08:49:43.598060 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs podName:7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:47.598043225 +0000 UTC m=+43.484781360 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs") pod "network-metrics-daemon-fkfxv" (UID: "7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.630796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.631188 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.631333 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.631440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.631531 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.748977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.749019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.749031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.749050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.749062 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.848864 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:43 crc kubenswrapper[4886]: E1124 08:49:43.849028 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.851518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.851565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.851577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.851594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.851613 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.954013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.954280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.954344 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.954411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:43 crc kubenswrapper[4886]: I1124 08:49:43.954485 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:43Z","lastTransitionTime":"2025-11-24T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.056713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.056764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.056776 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.056797 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.056810 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.159284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.159348 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.159373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.159399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.159412 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.262483 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.262879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.263098 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.263352 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.263502 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.366553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.366620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.366637 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.366669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.366686 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.469700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.469770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.469789 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.469814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.469830 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.572675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.572736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.572750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.572770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.572782 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.675628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.675688 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.675706 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.675730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.675745 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.779198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.779257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.779266 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.779295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.779306 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.849271 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.849331 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:44 crc kubenswrapper[4886]: E1124 08:49:44.849512 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.850117 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:44 crc kubenswrapper[4886]: E1124 08:49:44.850220 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:44 crc kubenswrapper[4886]: E1124 08:49:44.850313 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.866442 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.878587 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.881309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.881339 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.881349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.881365 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.881374 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.888839 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.900458 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.913445 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.932542 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc 
kubenswrapper[4886]: I1124 08:49:44.943409 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.960903 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808
b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9c
d31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.980899 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.983671 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.983717 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.983731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.983756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:44 crc kubenswrapper[4886]: I1124 08:49:44.983771 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:44Z","lastTransitionTime":"2025-11-24T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.000498 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.016626 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.028010 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.041664 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.054412 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.066238 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.076874 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.086400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.086454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.086465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc 
kubenswrapper[4886]: I1124 08:49:45.086486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.086499 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.089188 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:45Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.189280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.189329 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.189342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.189359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.189373 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.291974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.292014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.292023 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.292039 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.292048 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.394571 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.394607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.394615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.394631 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.394640 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.497574 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.497628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.497639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.497707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.497725 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.600196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.600223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.600231 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.600246 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.600254 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.702844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.702901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.702914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.702933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.702944 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.805289 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.805322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.805330 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.805345 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.805355 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.848822 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:45 crc kubenswrapper[4886]: E1124 08:49:45.848968 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.910511 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.910585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.910622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.910640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:45 crc kubenswrapper[4886]: I1124 08:49:45.910654 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:45Z","lastTransitionTime":"2025-11-24T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.013400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.013461 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.013480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.013503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.013548 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.116346 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.116400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.116417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.116438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.116454 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.219637 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.219735 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.219781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.219807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.219847 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.322473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.322547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.322566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.322592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.322611 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.424906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.424952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.424965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.424985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.425010 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.528185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.528248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.528264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.528286 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.528301 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.630486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.630524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.630536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.630552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.630565 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.733680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.733750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.733762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.733778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.733789 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.836279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.836341 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.836357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.836378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.836394 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.848706 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.848793 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:46 crc kubenswrapper[4886]: E1124 08:49:46.848861 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:46 crc kubenswrapper[4886]: E1124 08:49:46.848928 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.848986 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:46 crc kubenswrapper[4886]: E1124 08:49:46.849396 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.849564 4886 scope.go:117] "RemoveContainer" containerID="b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.939178 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.939219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.939227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.939242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:46 crc kubenswrapper[4886]: I1124 08:49:46.939252 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:46Z","lastTransitionTime":"2025-11-24T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.042541 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.042571 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.042580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.042603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.042621 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.144974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.145011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.145019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.145034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.145044 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.186643 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.188363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36"} Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.189759 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.201380 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbe
b67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.212604 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.223127 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.231797 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.248082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.248328 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.248440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.248531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.248598 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.248428 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7
f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.262145 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.274076 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.286557 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.296349 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.309893 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.326533 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.345508 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.353260 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.353290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.353300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.353316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.353325 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.365489 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.376709 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.389011 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.402449 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.413113 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:47Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.456121 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.456172 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.456183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.456198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.456206 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.559261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.559332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.559355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.559388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.559410 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.641328 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv"
Nov 24 08:49:47 crc kubenswrapper[4886]: E1124 08:49:47.641534 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 08:49:47 crc kubenswrapper[4886]: E1124 08:49:47.641604 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs podName:7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7 nodeName:}" failed. No retries permitted until 2025-11-24 08:49:55.64158182 +0000 UTC m=+51.528319985 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs") pod "network-metrics-daemon-fkfxv" (UID: "7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.662598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.662673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.662693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.662718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.662736 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.765576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.765616 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.765628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.765643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.765656 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.848718 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv"
Nov 24 08:49:47 crc kubenswrapper[4886]: E1124 08:49:47.848860 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.867610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.867660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.867675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.867697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.867715 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.970978 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.971013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.971022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.971040 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:47 crc kubenswrapper[4886]: I1124 08:49:47.971049 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:47Z","lastTransitionTime":"2025-11-24T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.073820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.074109 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.074235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.074334 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.074394 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.177228 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.177303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.177315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.177337 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.177350 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.280067 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.280107 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.280119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.280134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.280143 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.382316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.382362 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.382379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.382397 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.382412 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.484906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.485529 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.485636 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.485766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.485846 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.588343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.588385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.588394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.588409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.588420 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.691440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.691503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.691514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.691534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.691543 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.794355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.794411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.794420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.794437 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.794449 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.848353 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:49:48 crc kubenswrapper[4886]: E1124 08:49:48.848780 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.848434 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:49:48 crc kubenswrapper[4886]: E1124 08:49:48.849021 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.848369 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:49:48 crc kubenswrapper[4886]: E1124 08:49:48.849276 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.896871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.897162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.897264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.897385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:48 crc kubenswrapper[4886]: I1124 08:49:48.897450 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:48Z","lastTransitionTime":"2025-11-24T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.000603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.000639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.000647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.000661 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.000670 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.103221 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.103275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.103287 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.103310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.103326 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.205625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.205959 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.206044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.206129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.206264 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.309694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.309772 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.309786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.309821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.309836 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.412237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.412300 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.412310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.412331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.412342 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.516069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.516110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.516119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.516137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.516147 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.620428 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.620509 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.620527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.620556 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.620586 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.724349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.724384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.724393 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.724413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.724427 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.827286 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.827650 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.827736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.827802 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.827914 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.848827 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv"
Nov 24 08:49:49 crc kubenswrapper[4886]: E1124 08:49:49.849081 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.931332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.931381 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.931391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.931412 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:49 crc kubenswrapper[4886]: I1124 08:49:49.931424 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:49Z","lastTransitionTime":"2025-11-24T08:49:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.035138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.035230 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.035244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.035265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.035280 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.137687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.137768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.137896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.137924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.137964 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.240665 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.240701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.240710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.240726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.240736 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.342983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.343035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.343046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.343065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.343077 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.446075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.446635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.446792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.447006 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.447313 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.550687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.550778 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.550806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.550843 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.550869 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.654470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.654564 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.654586 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.654614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.654636 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.758800 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.758861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.758876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.758898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.758911 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.849178 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.849238 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:50 crc kubenswrapper[4886]: E1124 08:49:50.849367 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:50 crc kubenswrapper[4886]: E1124 08:49:50.849436 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.849787 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:50 crc kubenswrapper[4886]: E1124 08:49:50.850053 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.861265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.861555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.861637 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.861727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.861868 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.965048 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.965114 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.965127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.965192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:50 crc kubenswrapper[4886]: I1124 08:49:50.965212 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:50Z","lastTransitionTime":"2025-11-24T08:49:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.067903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.067954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.067968 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.067991 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.068005 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.171560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.171612 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.171624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.171645 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.171658 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.275111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.275815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.276002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.276138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.276268 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.379242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.379285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.379298 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.379316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.379329 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.482322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.482369 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.482378 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.482397 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.482409 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.585063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.585115 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.585127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.585146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.585188 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.687827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.687881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.687901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.687923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.687936 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.693030 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.693162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.693271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.693362 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.693445 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: E1124 08:49:51.706534 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.711445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.711489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.711502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.711524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.711541 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: E1124 08:49:51.728997 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.733741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.733785 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.733798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.733817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.733829 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: E1124 08:49:51.745751 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.750243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.750368 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.750444 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.750516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.750581 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: E1124 08:49:51.764058 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.768924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.769026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.769042 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.769067 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.769099 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: E1124 08:49:51.783189 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:51Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:51 crc kubenswrapper[4886]: E1124 08:49:51.783374 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.790665 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.790724 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.790735 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.790756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.790768 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.848762 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:51 crc kubenswrapper[4886]: E1124 08:49:51.848964 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.894026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.894119 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.894145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.894241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.894270 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.997534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.997584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.997596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.997618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:51 crc kubenswrapper[4886]: I1124 08:49:51.997634 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:51Z","lastTransitionTime":"2025-11-24T08:49:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.101054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.101111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.101123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.101145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.101181 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.204346 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.204420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.204430 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.204450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.204462 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.308092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.308209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.308224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.308244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.308259 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.411108 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.411171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.411181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.411200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.411210 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.513142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.513218 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.513229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.513245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.513254 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.616903 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.616967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.616985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.617013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.617032 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.720375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.720453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.720470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.720507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.720522 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.824685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.825106 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.825232 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.825342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.825432 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.848783 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.848830 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.849032 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:52 crc kubenswrapper[4886]: E1124 08:49:52.849684 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:52 crc kubenswrapper[4886]: E1124 08:49:52.849427 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:52 crc kubenswrapper[4886]: E1124 08:49:52.849821 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.929034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.929099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.929109 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.929128 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:52 crc kubenswrapper[4886]: I1124 08:49:52.929140 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:52Z","lastTransitionTime":"2025-11-24T08:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.032581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.032638 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.032652 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.032675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.032688 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.135487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.135572 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.135615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.135644 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.135660 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.238440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.238495 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.238520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.238542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.238556 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.341146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.341243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.341255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.341275 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.341286 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.443755 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.443839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.443858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.443878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.443890 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.550907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.551338 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.551419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.551524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.551616 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.654080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.654121 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.654135 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.654162 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.654172 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.756927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.756988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.757005 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.757032 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.757051 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.848583 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:53 crc kubenswrapper[4886]: E1124 08:49:53.848763 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.860007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.860080 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.860099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.860122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.860136 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.963011 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.963078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.963100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.963128 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:53 crc kubenswrapper[4886]: I1124 08:49:53.963179 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:53Z","lastTransitionTime":"2025-11-24T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.066296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.066356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.066372 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.066392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.066406 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.169677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.169732 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.169746 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.169770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.169783 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.273169 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.273235 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.273257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.273283 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.273295 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.375707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.375758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.375769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.375787 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.375803 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.478427 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.478728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.478805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.478882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.478950 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.581897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.581950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.581963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.581994 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.582010 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.684735 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.684823 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.684842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.684873 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.684929 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.788867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.788931 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.788943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.788963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.788974 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.848533 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:54 crc kubenswrapper[4886]: E1124 08:49:54.848705 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.848738 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.848851 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:54 crc kubenswrapper[4886]: E1124 08:49:54.848917 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:54 crc kubenswrapper[4886]: E1124 08:49:54.849073 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.863795 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.877442 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.890699 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.891183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc 
kubenswrapper[4886]: I1124 08:49:54.891216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.891227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.891252 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.891291 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.904357 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.918614 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.931619 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.944138 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.957765 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.972304 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc 
kubenswrapper[4886]: I1124 08:49:54.990610 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533
a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.993909 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.993940 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.993952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.993975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:54 crc kubenswrapper[4886]: I1124 08:49:54.993987 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:54Z","lastTransitionTime":"2025-11-24T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.011559 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.046860 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.066667 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.083874 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.096667 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.096729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.096739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.096753 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.096762 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.098558 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.110497 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.123413 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:55Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.203406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.203441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.203451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.203467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.203477 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.306165 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.306216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.306233 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.306254 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.306266 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.408885 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.408953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.408966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.408984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.408999 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.510967 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.511018 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.511029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.511046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.511082 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.613677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.613725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.613738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.613755 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.613768 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.717035 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.717117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.717134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.717181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.717196 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.735194 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:55 crc kubenswrapper[4886]: E1124 08:49:55.735512 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:55 crc kubenswrapper[4886]: E1124 08:49:55.735662 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs podName:7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7 nodeName:}" failed. No retries permitted until 2025-11-24 08:50:11.735632794 +0000 UTC m=+67.622371089 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs") pod "network-metrics-daemon-fkfxv" (UID: "7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.820502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.820552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.820561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.820579 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.820592 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.848298 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:55 crc kubenswrapper[4886]: E1124 08:49:55.848528 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.923547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.923604 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.923613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.923632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:55 crc kubenswrapper[4886]: I1124 08:49:55.923643 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:55Z","lastTransitionTime":"2025-11-24T08:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.026253 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.026323 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.026343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.026369 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.026385 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.129629 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.129719 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.129731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.129752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.129766 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.232699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.232739 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.232749 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.232766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.232777 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.335294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.335331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.335340 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.335354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.335365 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.438291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.438337 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.438347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.438366 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.438376 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.541347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.541397 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.541407 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.541422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.541432 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.644450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.644489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.644500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.644522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.644538 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.645188 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.645589 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 08:50:28.645564434 +0000 UTC m=+84.532302589 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.645631 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.645633 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.645682 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:50:28.645670098 +0000 UTC m=+84.532408223 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.645715 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.645781 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.645810 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:50:28.645802761 +0000 UTC m=+84.532540896 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.746505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.746618 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.746765 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.746787 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.746801 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.746856 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:50:28.746839632 +0000 UTC m=+84.633577767 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.746765 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.746935 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.746950 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.746988 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-24 08:50:28.746976586 +0000 UTC m=+84.633714721 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.748054 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.748085 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.748096 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.748113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.748123 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.848595 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.848852 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.849055 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.857830 4886 scope.go:117] "RemoveContainer" containerID="4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0" Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.857907 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.858120 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:56 crc kubenswrapper[4886]: E1124 08:49:56.858229 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.860219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.860272 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.860287 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.860309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.860323 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.963736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.964073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.964083 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.964098 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:56 crc kubenswrapper[4886]: I1124 08:49:56.964108 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:56Z","lastTransitionTime":"2025-11-24T08:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.067102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.067262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.067284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.067306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.067319 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.172679 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.172735 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.172747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.172769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.172783 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.220543 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.227482 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/1.log" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.229934 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.231057 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.236705 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.251888 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.267142 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.276894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.276964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.276982 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc 
kubenswrapper[4886]: I1124 08:49:57.277036 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.277053 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.282436 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.300736 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.318130 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.336031 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.356804 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.379899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.379944 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.379955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.379975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.379988 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.380605 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.396291 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:57 crc 
kubenswrapper[4886]: I1124 08:49:57.482946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.483004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.483017 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.483045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.483059 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.585219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.585310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.585327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.585353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.585372 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.688171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.688201 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.688209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.688224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.688234 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.791066 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.791102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.791113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.791131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.791142 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.893973 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.894005 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.894015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.894029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.894037 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.997085 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.997122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.997132 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.997161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:57 crc kubenswrapper[4886]: I1124 08:49:57.997174 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:57Z","lastTransitionTime":"2025-11-24T08:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.100468 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.100513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.100527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.100546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.100558 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.203791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.203830 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.203838 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.203858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.203870 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.307206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.307248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.307262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.307285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.307303 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.409805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.409833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.409842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.409861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.409877 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.512806 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.512845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.512855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.512872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.512886 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.616062 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.616108 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.616122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.616161 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.616177 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.693598 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:58 crc kubenswrapper[4886]: E1124 08:49:58.693759 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.693841 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.694430 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:49:58 crc kubenswrapper[4886]: E1124 08:49:58.694816 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:49:58 crc kubenswrapper[4886]: E1124 08:49:58.694647 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.694700 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:49:58 crc kubenswrapper[4886]: E1124 08:49:58.694972 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.703724 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.725351 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.725434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.725448 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.725473 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.725489 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.726209 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.751112 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.780546 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.799513 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.816664 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.828258 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.828294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.828307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.828324 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.828335 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.841449 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.860228 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.881111 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.893133 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.904983 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc 
kubenswrapper[4886]: I1124 08:49:58.925334 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.930544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.930577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.930587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.930618 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.930630 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:58Z","lastTransitionTime":"2025-11-24T08:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.945960 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.960596 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae
99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.976278 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:58 crc kubenswrapper[4886]: I1124 08:49:58.987763 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.002047 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.016627 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.033840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.034120 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.034207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.034283 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.034352 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.035090 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.050020 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.064570 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.082603 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.097746 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.112730 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.128603 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.137086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.137142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.137170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.137194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.137210 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.141611 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.239599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.239933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.239944 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.239980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.239992 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.343332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.343376 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.343386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.343403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.343414 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.445911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.446037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.446058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.446083 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.446127 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.548359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.548421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.548430 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.548450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.548459 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.651849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.651945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.651970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.652001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.652024 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.702066 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/2.log" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.702787 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/1.log" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.705084 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87" exitCode=1 Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.705157 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.705280 4886 scope.go:117] "RemoveContainer" containerID="4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.709099 4886 scope.go:117] "RemoveContainer" containerID="76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87" Nov 24 08:49:59 crc kubenswrapper[4886]: E1124 08:49:59.709619 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.719935 4886 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.733107 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.745517 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.754398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.754450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.754467 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.754487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.754499 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.758439 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7
f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.769582 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.778904 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc 
kubenswrapper[4886]: I1124 08:49:59.788997 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.798144 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.811352 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.827545 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for 
network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde7
6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.845261 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.848378 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:49:59 crc kubenswrapper[4886]: E1124 08:49:59.848483 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.848649 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:49:59 crc kubenswrapper[4886]: E1124 08:49:59.848724 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.857163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.857192 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.857203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.857218 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.857231 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.857874 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.869544 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.883110 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.895777 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.907707 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.919467 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.932495 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:49:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.959667 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:49:59 crc 
kubenswrapper[4886]: I1124 08:49:59.959719 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.959734 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.959754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:49:59 crc kubenswrapper[4886]: I1124 08:49:59.959767 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:49:59Z","lastTransitionTime":"2025-11-24T08:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.062453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.062503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.062519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.062536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.062547 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.165279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.165329 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.165340 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.165377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.165390 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.268318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.268363 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.268374 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.268392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.268405 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.371286 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.371375 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.371385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.371404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.371414 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.474380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.474451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.474470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.474496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.474508 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.576902 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.576937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.576948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.576964 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.576972 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.680393 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.680450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.680466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.680487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.680501 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.710497 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/2.log" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.782670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.782718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.782730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.782750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.782760 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.848311 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.848320 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:00 crc kubenswrapper[4886]: E1124 08:50:00.848505 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:00 crc kubenswrapper[4886]: E1124 08:50:00.848650 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.885558 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.885613 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.885624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.885641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.885652 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.989432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.989499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.989513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.989536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:00 crc kubenswrapper[4886]: I1124 08:50:00.989552 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:00Z","lastTransitionTime":"2025-11-24T08:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.093909 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.093966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.093975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.093990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.094020 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.198358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.198408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.198418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.198437 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.198446 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.301541 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.301587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.301602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.301625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.301641 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.403892 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.403957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.403973 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.403997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.404014 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.506484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.506535 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.506545 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.506559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.506568 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.609081 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.609123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.609134 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.609171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.609184 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.711267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.711307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.711316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.711329 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.711340 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.789446 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.789493 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.789505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.789526 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.789540 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: E1124 08:50:01.806496 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.811724 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.811783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.811798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.811820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.811833 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: E1124 08:50:01.825884 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.831180 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.831230 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.831241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.831263 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.831278 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: E1124 08:50:01.846827 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.848935 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.848968 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:01 crc kubenswrapper[4886]: E1124 08:50:01.849101 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:01 crc kubenswrapper[4886]: E1124 08:50:01.849224 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.851865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.851907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.851920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.851942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.851956 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: E1124 08:50:01.865131 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.869965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.870045 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.870064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.870092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.870113 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: E1124 08:50:01.886671 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:01Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:01 crc kubenswrapper[4886]: E1124 08:50:01.886859 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.889171 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.889245 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.889259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.889282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.889293 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.991642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.991693 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.991703 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.991721 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:01 crc kubenswrapper[4886]: I1124 08:50:01.991734 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:01Z","lastTransitionTime":"2025-11-24T08:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.094432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.094469 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.094478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.094499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.094510 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.197142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.197265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.197277 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.197295 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.197308 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.300138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.300827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.300844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.300865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.300878 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.403125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.403184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.403195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.403209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.403220 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.506167 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.506206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.506219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.506236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.506248 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.608337 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.608397 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.608421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.608438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.608449 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.711349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.711401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.711417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.711440 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.711457 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.814649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.814704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.814714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.814737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.814750 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.848381 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:02 crc kubenswrapper[4886]: E1124 08:50:02.848612 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.848830 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:02 crc kubenswrapper[4886]: E1124 08:50:02.848906 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.918256 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.918303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.918315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.918332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:02 crc kubenswrapper[4886]: I1124 08:50:02.918343 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:02Z","lastTransitionTime":"2025-11-24T08:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.021482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.021524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.021532 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.021546 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.021555 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.124322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.124363 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.124373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.124391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.124402 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.228342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.228406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.228419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.228478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.228490 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.331266 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.331323 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.331333 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.331353 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.331369 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.411021 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.426375 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.434166 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.434249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.434264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.434282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.434320 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.443502 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.459288 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.473760 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.486694 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.498307 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc 
kubenswrapper[4886]: I1124 08:50:03.522822 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name 
pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountP
ath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc 
kubenswrapper[4886]: I1124 08:50:03.537513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.537566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.537575 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.537591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.537600 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.543956 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.555226 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\"
,\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 
08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b
335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.566643 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.577523 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.590185 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.600539 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.615032 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.631972 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.639970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.640005 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.640013 4886 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.640029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.640039 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.646269 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.656234 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.671634 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.741416 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc 
kubenswrapper[4886]: I1124 08:50:03.741458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.741466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.741481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.741491 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.843465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.843495 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.843503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.843517 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.843525 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.848036 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:03 crc kubenswrapper[4886]: E1124 08:50:03.848124 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.848312 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:03 crc kubenswrapper[4886]: E1124 08:50:03.848366 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.945931 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.945969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.945980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.946000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:03 crc kubenswrapper[4886]: I1124 08:50:03.946010 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:03Z","lastTransitionTime":"2025-11-24T08:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.049007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.049131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.049187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.049216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.049233 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.151942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.151976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.151987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.152006 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.152018 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.254331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.254403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.254426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.254457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.254480 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.356948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.356986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.356995 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.357008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.357017 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.459458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.459503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.459513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.459530 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.459542 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.561798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.561854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.561869 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.561883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.561892 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.663573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.663612 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.663619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.663634 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.663643 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.765961 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.766000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.766010 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.766026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.766037 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.848551 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.848601 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:04 crc kubenswrapper[4886]: E1124 08:50:04.848703 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:04 crc kubenswrapper[4886]: E1124 08:50:04.848765 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.860604 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.868924 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.868974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.868986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.869008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.869022 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.872733 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.885360 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.894655 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc 
kubenswrapper[4886]: I1124 08:50:04.905305 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.916221 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.936183 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.954488 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.964825 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.970728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.970762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.970774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.970790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.970801 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:04Z","lastTransitionTime":"2025-11-24T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.972740 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:04 crc kubenswrapper[4886]: I1124 08:50:04.985193 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d
401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.004087 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name 
pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountP
ath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:05Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:05 crc 
kubenswrapper[4886]: I1124 08:50:05.021932 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:05Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.036387 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:05Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.048573 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:05Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.059668 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3
d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:05Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.073229 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:05Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.074122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.074186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.074198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.074216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.074226 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.087612 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:05Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.177197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.177237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.177248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.177266 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.177277 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.280769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.280826 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.280836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.280854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.280866 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.383835 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.383873 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.383883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.383898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.383916 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.486959 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.487003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.487014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.487033 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.487044 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.591075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.591172 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.591186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.591206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.591219 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.693503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.693535 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.693542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.693556 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.693564 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.796185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.796238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.796250 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.796290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.796303 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.849030 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.849030 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:05 crc kubenswrapper[4886]: E1124 08:50:05.849335 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:05 crc kubenswrapper[4886]: E1124 08:50:05.849572 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.899426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.899481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.899492 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.899508 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:05 crc kubenswrapper[4886]: I1124 08:50:05.899558 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:05Z","lastTransitionTime":"2025-11-24T08:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.002431 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.002471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.002481 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.002496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.002507 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.105366 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.105406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.105414 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.105426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.105436 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.208791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.208842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.208854 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.208873 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.208885 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.317464 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.317573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.317598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.317655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.317672 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.419700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.419740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.419750 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.419765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.419774 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.523125 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.523187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.523204 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.523238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.523249 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.626072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.626137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.626190 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.626215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.626230 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.729918 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.729957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.729965 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.729978 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.729987 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.833207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.833259 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.833270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.833293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.833308 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.848592 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.848634 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:06 crc kubenswrapper[4886]: E1124 08:50:06.848917 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:06 crc kubenswrapper[4886]: E1124 08:50:06.849218 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.936101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.936193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.936203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.936219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:06 crc kubenswrapper[4886]: I1124 08:50:06.936229 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:06Z","lastTransitionTime":"2025-11-24T08:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.073681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.073738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.073753 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.073771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.073784 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.176530 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.176566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.176576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.176593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.176604 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.279435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.279484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.279497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.279516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.279528 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.382325 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.382380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.382389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.382403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.382413 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.484798 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.484859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.484871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.484890 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.484901 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.587094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.587146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.587212 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.587242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.587266 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.689651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.689713 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.689740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.689771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.689788 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.791922 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.791955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.791963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.791980 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.791995 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.848716 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.848847 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:07 crc kubenswrapper[4886]: E1124 08:50:07.848982 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:07 crc kubenswrapper[4886]: E1124 08:50:07.848851 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.894016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.894050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.894058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.894073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.894081 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.997118 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.997263 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.997284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.997309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:07 crc kubenswrapper[4886]: I1124 08:50:07.997325 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:07Z","lastTransitionTime":"2025-11-24T08:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.100418 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.100468 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.100480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.100501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.100516 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.204083 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.204140 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.204178 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.204205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.204223 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.306748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.306794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.306807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.306825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.306834 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.408820 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.408857 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.408878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.408894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.408905 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.511501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.511833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.511953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.512046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.512129 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.615040 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.615436 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.615598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.615774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.615893 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.718909 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.718960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.718974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.718993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.719007 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.821851 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.821896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.821907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.821925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.821935 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.848181 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.848266 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:08 crc kubenswrapper[4886]: E1124 08:50:08.848350 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:08 crc kubenswrapper[4886]: E1124 08:50:08.848411 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.924594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.924864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.924929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.925013 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:08 crc kubenswrapper[4886]: I1124 08:50:08.925100 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:08Z","lastTransitionTime":"2025-11-24T08:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.027876 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.027929 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.027939 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.027958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.027967 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.157766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.157819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.157827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.157848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.157860 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.260885 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.260942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.260955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.260976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.260988 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.364896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.364945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.364958 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.364979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.364993 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.468827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.468877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.468890 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.468912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.468928 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.572122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.572181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.572191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.572210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.572221 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.675752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.675791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.675804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.675819 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.675829 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.778050 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.778078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.778087 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.778099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.778109 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.848611 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:09 crc kubenswrapper[4886]: E1124 08:50:09.848806 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.849079 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:09 crc kubenswrapper[4886]: E1124 08:50:09.849180 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.881047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.881123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.881139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.881185 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.881201 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.984293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.984976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.985046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.985144 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:09 crc kubenswrapper[4886]: I1124 08:50:09.985262 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:09Z","lastTransitionTime":"2025-11-24T08:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.089105 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.089203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.089217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.089239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.089252 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.192662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.193240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.193331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.193417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.193483 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.296900 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.297455 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.297547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.297652 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.297717 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.400309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.400385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.400402 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.400426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.400439 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.502840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.502877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.502888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.502907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.502918 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.605609 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.605649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.605663 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.605682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.605692 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.708796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.708842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.708850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.708867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.708878 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.812357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.812615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.812628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.812646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.812663 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.849432 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:10 crc kubenswrapper[4886]: E1124 08:50:10.849622 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.849908 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:10 crc kubenswrapper[4886]: E1124 08:50:10.850186 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.914986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.915028 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.915037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.915055 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:10 crc kubenswrapper[4886]: I1124 08:50:10.915068 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:10Z","lastTransitionTime":"2025-11-24T08:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.017510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.017554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.017562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.017580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.017588 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.120512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.120555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.120568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.120599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.120613 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.223629 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.223698 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.223708 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.223726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.223735 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.326701 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.326761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.326770 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.326790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.326802 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.430514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.430578 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.430593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.430624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.430648 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.534017 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.534090 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.534116 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.534139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.534182 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.637445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.637486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.637497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.637525 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.637538 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.737005 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:11 crc kubenswrapper[4886]: E1124 08:50:11.737234 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:50:11 crc kubenswrapper[4886]: E1124 08:50:11.737314 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs podName:7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7 nodeName:}" failed. No retries permitted until 2025-11-24 08:50:43.737290242 +0000 UTC m=+99.624028377 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs") pod "network-metrics-daemon-fkfxv" (UID: "7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.743123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.743193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.743206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.743231 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.743252 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.849075 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:11 crc kubenswrapper[4886]: E1124 08:50:11.849304 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.849416 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:11 crc kubenswrapper[4886]: E1124 08:50:11.849554 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.852786 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.852814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.852824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.852841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.852857 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.955884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.955917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.955925 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.955955 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:11 crc kubenswrapper[4886]: I1124 08:50:11.955964 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:11Z","lastTransitionTime":"2025-11-24T08:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.058513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.058568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.058581 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.058599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.058610 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.161938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.161989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.162002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.162024 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.162034 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.205901 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.205969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.205981 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.206002 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.206019 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: E1124 08:50:12.221970 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.227555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.227602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.227617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.227641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.227656 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: E1124 08:50:12.247340 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.252516 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.252550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.252560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.252578 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.252589 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: E1124 08:50:12.268288 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.273186 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.273249 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.273268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.273296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.273330 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: E1124 08:50:12.286911 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.291573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.291639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.291651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.291672 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.291690 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: E1124 08:50:12.305206 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:12 crc kubenswrapper[4886]: E1124 08:50:12.305418 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.307455 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.307500 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.307509 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.307527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.307540 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.410163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.410206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.410219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.410242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.410259 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.513596 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.513645 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.513655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.513677 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.513694 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.616707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.616755 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.616765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.616783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.616794 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.719139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.719229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.719244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.719267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.719284 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.822840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.823104 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.823203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.823301 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.823372 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.848165 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.848378 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:12 crc kubenswrapper[4886]: E1124 08:50:12.848545 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:12 crc kubenswrapper[4886]: E1124 08:50:12.848688 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.925594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.925648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.925659 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.925679 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:12 crc kubenswrapper[4886]: I1124 08:50:12.925692 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:12Z","lastTransitionTime":"2025-11-24T08:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.028564 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.028608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.028617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.028635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.028644 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.131907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.131960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.131971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.131993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.132006 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.235211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.235270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.235283 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.235309 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.235326 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.337811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.337927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.337942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.337957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.337967 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.440892 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.440934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.440977 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.441066 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.441079 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.544479 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.544518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.544529 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.544548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.544560 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.647919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.647972 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.647986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.648009 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.648022 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.750938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.750979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.750989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.751008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.751021 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.848566 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.848668 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:13 crc kubenswrapper[4886]: E1124 08:50:13.848711 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:13 crc kubenswrapper[4886]: E1124 08:50:13.848913 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.854585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.854638 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.854649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.854667 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.854680 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.957658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.957702 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.957728 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.957751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:13 crc kubenswrapper[4886]: I1124 08:50:13.957763 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:13Z","lastTransitionTime":"2025-11-24T08:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.060887 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.060936 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.060947 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.060962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.060974 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.163930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.163963 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.163971 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.163987 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.163997 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.266727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.266762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.266771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.266788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.266798 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.369350 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.369400 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.369413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.369445 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.369456 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.471844 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.471937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.471957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.472026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.472047 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.575141 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.575196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.575205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.575223 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.575234 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.677179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.677214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.677224 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.677242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.677257 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.758194 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/0.log" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.758262 4886 generic.go:334] "Generic (PLEG): container finished" podID="5d515fec-60f3-4bf7-9ba4-697bb691b670" containerID="ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085" exitCode=1 Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.758298 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dk8j" event={"ID":"5d515fec-60f3-4bf7-9ba4-697bb691b670","Type":"ContainerDied","Data":"ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.758722 4886 scope.go:117] "RemoveContainer" containerID="ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.777498 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.781384 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.781465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.781477 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 
08:50:14.781504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.781517 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.791678 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.807767 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.819856 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.829515 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc 
kubenswrapper[4886]: I1124 08:50:14.843175 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.850271 4886 scope.go:117] "RemoveContainer" containerID="76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87" Nov 24 08:50:14 crc kubenswrapper[4886]: E1124 08:50:14.850538 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.850835 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:14 crc kubenswrapper[4886]: E1124 08:50:14.850941 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.851374 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:14 crc kubenswrapper[4886]: E1124 08:50:14.851578 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.858173 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" 
(2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:/
/fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.873297 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.884313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.884360 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.884377 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.884404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.884417 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.887490 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.901242 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.911552 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.933631 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.960359 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4abc505b9f103f78af1be41a40b7527bf35d25bc6c6e521c6e344290ff1d77b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"message\\\":\\\"ices.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1124 08:49:40.973903 6381 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nF1124 08:49:40.973905 6381 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for 
network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde7
6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.979968 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.987012 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.987074 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.987089 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.987109 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.987122 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:14Z","lastTransitionTime":"2025-11-24T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:14 crc kubenswrapper[4886]: I1124 08:50:14.991220 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.004120 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.014960 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.029393 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:13Z\\\",\\\"message\\\":\\\"2025-11-24T08:49:28+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e\\\\n2025-11-24T08:49:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e to /host/opt/cni/bin/\\\\n2025-11-24T08:49:28Z [verbose] multus-daemon started\\\\n2025-11-24T08:49:28Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:50:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.043072 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.057108 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.069687 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:13Z\\\",\\\"message\\\":\\\"2025-11-24T08:49:28+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e\\\\n2025-11-24T08:49:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e to /host/opt/cni/bin/\\\\n2025-11-24T08:49:28Z [verbose] multus-daemon started\\\\n2025-11-24T08:49:28Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:50:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.089236 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.089941 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.089988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.089997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.090015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.090024 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.101073 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.109942 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.120698 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.131319 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.139899 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc 
kubenswrapper[4886]: I1124 08:50:15.162215 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: 
kind/namespace/name pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.183391 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.192764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.192811 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.192823 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.192841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.192854 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.197709 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 
08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.211592 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.225530 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.240853 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.251944 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.268988 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.283480 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.294875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.294911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.294921 4886 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.294940 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.294950 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.300700 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.313481 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
4T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.327203 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.343315 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.356795 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.372837 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.395688 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace 
openshift-ovn-kubernetes for network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.397391 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.397417 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.397426 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.397442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.397456 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.434386 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.455202 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.478402 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.492367 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.499355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.499425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.499438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc 
kubenswrapper[4886]: I1124 08:50:15.499456 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.499470 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.506355 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:13Z\\\",\\\"message\\\":\\\"2025-11-24T08:49:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e\\\\n2025-11-24T08:49:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e to /host/opt/cni/bin/\\\\n2025-11-24T08:49:28Z [verbose] multus-daemon started\\\\n2025-11-24T08:49:28Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:50:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.520195 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.531031 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.546406 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.558218 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.569590 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc 
kubenswrapper[4886]: I1124 08:50:15.583321 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.602046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.602091 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.602100 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.602117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.602126 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.704648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.704685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.704697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.704714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.704727 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.764803 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/0.log" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.764879 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dk8j" event={"ID":"5d515fec-60f3-4bf7-9ba4-697bb691b670","Type":"ContainerStarted","Data":"d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.780283 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.790597 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.803400 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.806520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.806552 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.806562 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.806580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.806594 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.815400 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fa
c318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.825738 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc 
kubenswrapper[4886]: I1124 08:50:15.839948 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.848786 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.848889 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:15 crc kubenswrapper[4886]: E1124 08:50:15.848977 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:15 crc kubenswrapper[4886]: E1124 08:50:15.849092 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.855812 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 
08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.870521 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
4T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.883200 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.908862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.908897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.908907 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.908923 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.908935 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:15Z","lastTransitionTime":"2025-11-24T08:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.908997 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.921179 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.938561 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.960585 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace 
openshift-ovn-kubernetes for network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.983360 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:15 crc kubenswrapper[4886]: I1124 08:50:15.995622 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.009083 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.010867 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.010919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.010932 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.010950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.010963 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.024616 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T08:50:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.040250 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:13Z\\\",\\\"message\\\":\\\"2025-11-24T08:49:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e\\\\n2025-11-24T08:49:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e to /host/opt/cni/bin/\\\\n2025-11-24T08:49:28Z [verbose] multus-daemon started\\\\n2025-11-24T08:49:28Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:50:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.113968 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.114008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.114016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.114031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.114040 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.216485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.216527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.216538 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.216555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.216566 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.318565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.318608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.318625 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.318643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.318654 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.420805 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.420839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.420848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.420862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.420872 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.523638 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.523675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.523687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.523704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.523716 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.625501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.625539 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.625549 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.625565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.625574 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.727557 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.727604 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.727614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.727635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.727649 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.829898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.829943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.829953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.829969 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.829979 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.848132 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.848199 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:16 crc kubenswrapper[4886]: E1124 08:50:16.848275 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:16 crc kubenswrapper[4886]: E1124 08:50:16.848355 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.931881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.932117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.932198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.932270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:16 crc kubenswrapper[4886]: I1124 08:50:16.932329 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:16Z","lastTransitionTime":"2025-11-24T08:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.034359 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.034408 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.034419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.034438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.034452 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.136865 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.136906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.136917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.136933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.136947 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.243241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.243291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.244265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.244303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.244316 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.346534 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.346576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.346587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.346604 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.346616 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.449211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.449265 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.449276 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.449299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.449320 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.551763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.551815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.551825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.551842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.551852 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.654593 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.654662 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.654678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.654710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.654815 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.757284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.757327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.757338 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.757357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.757371 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.848364 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.848407 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:17 crc kubenswrapper[4886]: E1124 08:50:17.849235 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:17 crc kubenswrapper[4886]: E1124 08:50:17.849340 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.859615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.859666 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.859680 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.859700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.859715 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.962195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.962240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.962250 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.962266 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:17 crc kubenswrapper[4886]: I1124 08:50:17.962275 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:17Z","lastTransitionTime":"2025-11-24T08:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.065274 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.065320 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.065332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.065354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.065368 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.167864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.167904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.167914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.167932 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.167944 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.270689 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.270738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.270748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.270766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.270776 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.373399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.373449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.373475 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.373496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.373509 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.477023 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.477065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.477076 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.477094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.477109 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.579747 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.579792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.579807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.579830 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.579842 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.683079 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.683200 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.683215 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.683239 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.683250 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.785043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.785082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.785092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.785106 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.785121 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.848939 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.848988 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:18 crc kubenswrapper[4886]: E1124 08:50:18.849232 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:18 crc kubenswrapper[4886]: E1124 08:50:18.849455 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.887674 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.887729 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.887741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.887762 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.887779 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.990466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.990517 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.990530 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.990548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:18 crc kubenswrapper[4886]: I1124 08:50:18.990560 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:18Z","lastTransitionTime":"2025-11-24T08:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.093573 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.093614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.093623 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.093641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.093651 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.196892 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.196943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.196953 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.196974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.196986 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.300078 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.300205 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.300234 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.300272 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.300297 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.403687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.403808 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.403829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.403858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.403880 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.505742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.505782 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.505791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.505807 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.505817 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.608294 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.608330 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.608339 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.608354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.608366 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.711004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.711181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.711196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.711217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.711229 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.813930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.813997 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.814007 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.814023 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.814032 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.848910 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:19 crc kubenswrapper[4886]: E1124 08:50:19.849069 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.849132 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:19 crc kubenswrapper[4886]: E1124 08:50:19.849327 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.916674 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.916725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.916737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.916759 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:19 crc kubenswrapper[4886]: I1124 08:50:19.916772 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:19Z","lastTransitionTime":"2025-11-24T08:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.019286 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.019342 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.019351 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.019366 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.019376 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.124666 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.124718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.124741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.124761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.124775 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.228139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.228199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.228210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.228227 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.228240 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.330769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.330824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.330839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.330862 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.330876 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.433130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.433985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.434023 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.434052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.434072 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.538099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.538367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.538386 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.538406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.538416 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.641942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.641983 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.641993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.642009 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.642019 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.745109 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.745468 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.745488 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.745512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.745529 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.848302 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.848346 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.848316 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.848355 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: E1124 08:50:20.848472 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.848497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.848519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.848530 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:20 crc kubenswrapper[4886]: E1124 08:50:20.848577 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.951197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.951273 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.951291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.951792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:20 crc kubenswrapper[4886]: I1124 08:50:20.951858 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:20Z","lastTransitionTime":"2025-11-24T08:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.055067 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.055122 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.055137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.055187 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.055202 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.158848 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.158896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.158909 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.158930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.158944 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.262577 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.263133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.263327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.263443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.263581 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.367131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.367210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.367293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.367318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.367335 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.471827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.471881 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.471894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.471917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.471932 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.575170 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.575471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.575550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.575632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.575787 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.678443 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.678649 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.678742 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.678821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.678887 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.781496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.781566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.781587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.781643 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.781659 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.849101 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.849210 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:21 crc kubenswrapper[4886]: E1124 08:50:21.849635 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:21 crc kubenswrapper[4886]: E1124 08:50:21.849840 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.884333 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.884379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.884392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.884411 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.884426 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.987476 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.987760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.987833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.987910 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:21 crc kubenswrapper[4886]: I1124 08:50:21.987978 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:21Z","lastTransitionTime":"2025-11-24T08:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.091127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.091191 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.091203 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.091219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.091228 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.194658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.194710 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.194720 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.194738 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.194752 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.297297 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.297356 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.297370 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.297388 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.297396 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.400057 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.400097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.400110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.400130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.400142 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.502533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.502588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.502600 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.502619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.502633 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.503718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.503783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.503793 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.503809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.503818 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: E1124 08:50:22.517317 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.527052 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.527102 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.527113 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.527131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.527143 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: E1124 08:50:22.545052 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.550092 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.550179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.550194 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.550214 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.550226 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: E1124 08:50:22.566471 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.571091 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.571145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.571182 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.571201 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.571214 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: E1124 08:50:22.585551 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.589357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.589558 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.589724 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.589823 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.589887 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: E1124 08:50:22.602397 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:22 crc kubenswrapper[4886]: E1124 08:50:22.602555 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.605318 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.605600 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.605731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.605825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.605910 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.708369 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.708626 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.708711 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.708803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.708893 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.810864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.810952 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.810975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.811004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.811024 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.848108 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:22 crc kubenswrapper[4886]: E1124 08:50:22.848250 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.848108 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:22 crc kubenswrapper[4886]: E1124 08:50:22.848532 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.912954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.913010 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.913025 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.913047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:22 crc kubenswrapper[4886]: I1124 08:50:22.913060 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:22Z","lastTransitionTime":"2025-11-24T08:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.016754 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.016822 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.016840 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.016912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.016950 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.120278 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.120343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.120357 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.120379 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.120395 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.223507 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.223550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.223561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.223578 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.223591 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.326082 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.326190 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.326207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.326225 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.326234 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.429827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.429890 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.429908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.429935 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.429955 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.532976 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.533026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.533063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.533086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.533100 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.635774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.635824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.635832 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.635850 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.635863 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.737972 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.738022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.738031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.738049 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.738064 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.841673 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.841736 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.841748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.841768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.841779 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.849175 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.849221 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:23 crc kubenswrapper[4886]: E1124 08:50:23.849391 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:23 crc kubenswrapper[4886]: E1124 08:50:23.849584 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.944966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.945010 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.945022 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.945043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:23 crc kubenswrapper[4886]: I1124 08:50:23.945059 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:23Z","lastTransitionTime":"2025-11-24T08:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.048809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.048878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.048896 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.048921 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.048935 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.152409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.152478 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.152491 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.152513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.152529 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.255670 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.255718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.255727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.255745 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.255756 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.359564 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.359614 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.359639 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.359661 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.359673 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.465432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.465576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.465602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.465637 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.465663 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.567998 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.568043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.568055 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.568073 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.568083 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.670790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.670829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.670837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.670855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.670870 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.774447 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.774505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.774519 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.774537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.774549 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.849616 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.849711 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:24 crc kubenswrapper[4886]: E1124 08:50:24.849782 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:24 crc kubenswrapper[4886]: E1124 08:50:24.849994 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.867417 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.877304 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.877363 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.877373 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.877395 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.877415 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.882606 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.894357 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:24 crc 
kubenswrapper[4886]: I1124 08:50:24.908644 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533
a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.929933 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: 
kind/namespace/name pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.950635 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.963873 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.975897 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
4T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.980405 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.980484 4886 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.980499 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.980520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.980534 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:24Z","lastTransitionTime":"2025-11-24T08:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:24 crc kubenswrapper[4886]: I1124 08:50:24.990969 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:50:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.004747 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.022381 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.037034 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.052499 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.066083 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.080900 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:13Z\\\",\\\"message\\\":\\\"2025-11-24T08:49:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e\\\\n2025-11-24T08:49:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e to /host/opt/cni/bin/\\\\n2025-11-24T08:49:28Z [verbose] multus-daemon started\\\\n2025-11-24T08:49:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T08:50:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.083718 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.083814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.083872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.083942 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.084001 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.095562 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.109673 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.123407 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:25Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.186016 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.186324 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.186404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.186491 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.186585 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.289787 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.289841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.289857 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.289878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.289892 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.392751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.392791 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.392804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.392821 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.392833 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.494752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.494843 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.494856 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.494873 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.494883 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.597257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.597304 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.597314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.597331 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.597343 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.700282 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.700336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.700349 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.700394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.700408 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.803272 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.803327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.803341 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.803361 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.803374 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.848825 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.848944 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:25 crc kubenswrapper[4886]: E1124 08:50:25.849276 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:25 crc kubenswrapper[4886]: E1124 08:50:25.849386 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.849781 4886 scope.go:117] "RemoveContainer" containerID="76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.907117 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.907457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.907469 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.907485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:25 crc kubenswrapper[4886]: I1124 08:50:25.907496 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:25Z","lastTransitionTime":"2025-11-24T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.011503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.026615 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.026882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.026992 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.027073 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.129605 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.129833 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.129846 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.129860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.129869 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.234310 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.234371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.234385 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.234409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.234427 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.336622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.336657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.336669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.336684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.336693 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.439879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.439919 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.439933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.439951 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.439963 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.542425 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.542487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.542505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.542542 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.542582 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.645098 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.645150 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.645179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.645195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.645203 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.747685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.747714 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.747724 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.747741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.747750 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.803795 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/2.log" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.809891 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.848996 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:26 crc kubenswrapper[4886]: E1124 08:50:26.849171 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.849163 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:26 crc kubenswrapper[4886]: E1124 08:50:26.849424 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.850788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.850818 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.850829 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.850842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.850853 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.953270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.953346 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.953358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.953380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:26 crc kubenswrapper[4886]: I1124 08:50:26.953396 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:26Z","lastTransitionTime":"2025-11-24T08:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.055651 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.055688 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.055700 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.055716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.055725 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.158560 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.158610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.158621 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.158640 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.158649 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.260691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.260731 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.260740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.260756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.260765 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.369449 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.369514 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.369531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.369556 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.369570 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.471839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.471875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.471884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.471899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.471910 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.575261 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.575319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.575332 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.575358 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.575372 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.677761 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.677801 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.677812 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.677872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.677886 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.780415 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.780493 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.780508 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.780537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.780554 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.814713 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/3.log" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.815383 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/2.log" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.817917 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" exitCode=1 Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.817951 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.817993 4886 scope.go:117] "RemoveContainer" containerID="76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.818815 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 08:50:27 crc kubenswrapper[4886]: E1124 08:50:27.819116 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.832230 4886 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.846961 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.848321 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:27 crc kubenswrapper[4886]: E1124 08:50:27.848607 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.848591 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:27 crc kubenswrapper[4886]: E1124 08:50:27.848705 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.859664 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc 
kubenswrapper[4886]: I1124 08:50:27.871853 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.884794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.884864 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.884878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.884906 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.884922 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.888956 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.908223 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a9b348444e1c0367116f0081d701af938e94b1cc0731ae7fe4294b12666b87\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:49:59Z\\\",\\\"message\\\":\\\"s_controller.go:360] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes for network=default : 11.501µs\\\\nI1124 08:49:59.095768 6567 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095697 6567 ovnkube_controller.go:1292] Config duration recorder: 
kind/namespace/name pod/openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx. OVN-Kubernetes controller took 0.000443563 seconds. No OVN measurement.\\\\nI1124 08:49:59.095828 6567 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-657wc\\\\nI1124 08:49:59.095838 6567 services_controller.go:356] Processing sync for service openshift-controller-manager/controller-manager for network=default\\\\nF1124 08:49:59.095843 6567 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:27Z\\\",\\\"message\\\":\\\"workPolicy event handler 4 for removal\\\\nI1124 08:50:27.556108 6936 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:50:27.556113 6936 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 08:50:27.556129 6936 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:50:27.556134 6936 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 08:50:27.556184 6936 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 08:50:27.556190 6936 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI1124 08:50:27.556213 6936 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 08:50:27.556214 6936 factory.go:656] Stopping watch factory\\\\nI1124 08:50:27.556213 6936 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:50:27.556251 6936 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 08:50:27.556233 6936 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 08:50:27.556237 6936 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:50:27.556275 6936 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 08:50:27.556251 6936 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 08:50:27.556332 6936 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:50:27.556436 6936 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/li
b/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.926536 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.939582 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.951916 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
4T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.964576 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.977190 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.987945 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.987982 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.987993 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.988010 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.988021 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:27Z","lastTransitionTime":"2025-11-24T08:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:27 crc kubenswrapper[4886]: I1124 08:50:27.991691 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:27Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.006318 4886 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.018298 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.031219 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:13Z\\\",\\\"message\\\":\\\"2025-11-24T08:49:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e\\\\n2025-11-24T08:49:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e to /host/opt/cni/bin/\\\\n2025-11-24T08:49:28Z [verbose] multus-daemon started\\\\n2025-11-24T08:49:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T08:50:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.043642 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.056197 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.066896 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.091591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.091658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.091667 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.091685 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.091697 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.194553 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.194594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.194603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.194617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.194628 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.296852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.296899 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.296911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.296931 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.296944 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.400660 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.400734 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.400746 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.400768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.400781 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.502970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.503017 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.503028 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.503046 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.503058 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.605872 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.605934 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.605957 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.605988 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.606004 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.708420 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.708984 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.709060 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.709149 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.709281 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.726112 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.726343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.726385 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.726504 4886 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.726527 4886 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.726576 4886 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:32.726558367 +0000 UTC m=+148.613296502 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.726611 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:32.726595089 +0000 UTC m=+148.613333214 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.726742 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:32.726727422 +0000 UTC m=+148.613465627 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.812075 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.812133 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.812145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.812182 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.812194 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.823079 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/3.log" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.827002 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.827082 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.827308 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.827322 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.827353 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.827384 4886 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.827444 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:32.827425546 +0000 UTC m=+148.714163891 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.827329 4886 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.827596 4886 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.827625 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-24 08:51:32.827618352 +0000 UTC m=+148.714356487 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.827357 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.828061 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.846809 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.848960 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.849097 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.849354 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:28 crc kubenswrapper[4886]: E1124 08:50:28.849481 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.860571 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.872249 4886 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc 
kubenswrapper[4886]: I1124 08:50:28.883145 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.899718 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808
b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9c
d31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.915421 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.915502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.915515 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.915533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.915542 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:28Z","lastTransitionTime":"2025-11-24T08:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.919987 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:27Z\\\",\\\"message\\\":\\\"workPolicy event handler 4 for removal\\\\nI1124 08:50:27.556108 6936 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:50:27.556113 6936 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 08:50:27.556129 6936 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 
08:50:27.556134 6936 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 08:50:27.556184 6936 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 08:50:27.556190 6936 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:50:27.556213 6936 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 08:50:27.556214 6936 factory.go:656] Stopping watch factory\\\\nI1124 08:50:27.556213 6936 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:50:27.556251 6936 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 08:50:27.556233 6936 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 08:50:27.556237 6936 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:50:27.556275 6936 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 08:50:27.556251 6936 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 08:50:27.556332 6936 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:50:27.556436 6936 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.943573 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.960316 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.973655 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc88283d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
4T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:28 crc kubenswrapper[4886]: I1124 08:50:28.988351 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://736f509b74b3eb557d2963822fd7f3aa507fc34bc178d6b2c05e05dde6e2c88e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T08:50:28Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.002592 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.016317 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6279457-41d0-46e2-9a21-1d3c74311083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b8da4abfec501a571590893bbace52b9e8453890065d2a01b0da8c5b8f9aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db48bd367bf72ecf426c27f7afe1df9a346b1
d8bb8fe198a14399f47699d47b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvm6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vcq8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.017814 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.017845 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.017853 4886 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.017875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.017889 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.028125 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.041952 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23cb993e-0360-4449-b604-8ddd825a6502\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://008c7d8517b1c3c5f875d9e73315ceb9936936a1d977422f16bf2aaba7e3f4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6d
d62f885c0366830a60b2be34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8hf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zc46q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.059595 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2dk8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d515fec-60f3-4bf7-9ba4-697bb691b670\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:13Z\\\",\\\"message\\\":\\\"2025-11-24T08:49:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e\\\\n2025-11-24T08:49:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_be9db12b-72b2-4aa1-a7ac-01b80a43728e to /host/opt/cni/bin/\\\\n2025-11-24T08:49:28Z [verbose] multus-daemon started\\\\n2025-11-24T08:49:28Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T08:50:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6pfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2dk8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.078022 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56bddcbe3378ccc43de3046352a1284d9590790973736e218e891b804bfe1ff8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.092127 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.105069 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-82v6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5d34f7b-b75b-4572-87ca-a01c66ba67b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9fe889c11fc75d5c449c6facbeb67f8fc2d60ffedbdf1433b72147cdabd353e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8qmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-82v6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:29Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.120442 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.120471 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.120480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.120495 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.120506 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.223293 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.223354 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.223367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.223392 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.223409 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.326049 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.326097 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.326110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.326127 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.326139 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.429001 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.429047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.429056 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.429077 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.429090 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.532139 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.532240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.532255 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.532280 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.532295 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.635756 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.635841 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.635855 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.635878 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.635892 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.738990 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.739031 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.739040 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.739058 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.739070 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.843584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.843653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.843665 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.843691 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.843704 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.848977 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.848997 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:29 crc kubenswrapper[4886]: E1124 08:50:29.849216 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:29 crc kubenswrapper[4886]: E1124 08:50:29.849297 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.947257 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.947334 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.947347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.947371 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:29 crc kubenswrapper[4886]: I1124 08:50:29.947384 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:29Z","lastTransitionTime":"2025-11-24T08:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.050079 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.050126 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.050142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.050177 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.050190 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.153517 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.153583 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.153599 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.153624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.153645 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.256603 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.256665 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.256676 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.256699 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.256712 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.360337 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.360396 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.360409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.360431 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.360443 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.464515 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.464565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.464582 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.464607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.464619 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.567694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.567751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.567764 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.567790 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.567807 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.670523 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.670576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.670587 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.670608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.670623 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.772920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.772959 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.772968 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.772985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.773001 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.849029 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:30 crc kubenswrapper[4886]: E1124 08:50:30.849208 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.849462 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:30 crc kubenswrapper[4886]: E1124 08:50:30.849565 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.874816 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.874860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.874871 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.874888 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.874899 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.977451 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.977498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.977509 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.977525 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:30 crc kubenswrapper[4886]: I1124 08:50:30.977536 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:30Z","lastTransitionTime":"2025-11-24T08:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.080520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.080564 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.080571 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.080588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.080598 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.183937 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.184567 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.184585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.184610 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.184627 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.287453 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.287496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.287506 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.287524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.287535 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.389831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.389875 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.389886 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.389905 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.389916 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.492879 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.492948 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.492974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.493004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.493025 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.595702 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.595763 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.595781 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.595804 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.595822 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.698380 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.698450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.698476 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.698512 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.698537 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.802229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.802292 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.802304 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.802327 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.802341 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.848621 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:31 crc kubenswrapper[4886]: E1124 08:50:31.848733 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.848886 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:31 crc kubenswrapper[4886]: E1124 08:50:31.849336 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.905612 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.905908 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.906041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.906229 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:31 crc kubenswrapper[4886]: I1124 08:50:31.906439 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:31Z","lastTransitionTime":"2025-11-24T08:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.009894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.009938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.009950 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.009970 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.009984 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.112635 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.113041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.113112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.113198 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.113270 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.215860 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.215900 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.215911 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.215930 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.215940 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.318622 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.318646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.318654 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.318669 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.318703 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.421026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.421319 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.421419 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.421487 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.421559 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.524482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.524522 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.524531 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.524550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.524559 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.627837 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.627884 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.627894 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.627912 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.627924 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.731137 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.731589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.731704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.731827 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.731923 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.822041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.822094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.822110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.822130 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.822143 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: E1124 08:50:32.837713 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.843657 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.844070 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.844209 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.844291 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.844373 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.849046 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.849050 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:32 crc kubenswrapper[4886]: E1124 08:50:32.849283 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:32 crc kubenswrapper[4886]: E1124 08:50:32.849472 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:32 crc kubenswrapper[4886]: E1124 08:50:32.859096 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.864263 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.864314 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.864323 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.864343 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.864356 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: E1124 08:50:32.877189 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.886624 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.886715 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.886735 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.886766 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.886781 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: E1124 08:50:32.901553 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.906427 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.906474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.906485 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.906504 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.906516 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:32 crc kubenswrapper[4886]: E1124 08:50:32.919254 4886 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf6e1d17-5641-40b5-abdc-9697895ace84\\\",\\\"systemUUID\\\":\\\"b95cd08d-0a26-454e-842e-e33553e0c6a8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:32 crc kubenswrapper[4886]: E1124 08:50:32.919390 4886 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.920979 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.921029 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.921043 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.921065 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:32 crc kubenswrapper[4886]: I1124 08:50:32.921078 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:32Z","lastTransitionTime":"2025-11-24T08:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.023725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.023765 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.023775 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.023792 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.023802 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.126268 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.126307 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.126317 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.126333 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.126343 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.228628 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.228696 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.228704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.228726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.228735 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.331591 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.331913 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.332026 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.332143 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.332290 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.435592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.435648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.435658 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.435675 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.435686 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.538399 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.538653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.538727 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.538795 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.538856 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.641494 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.641536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.641544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.641559 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.641569 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.743270 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.743303 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.743313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.743328 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.743341 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.845047 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.845555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.845648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.845722 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.845783 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.848399 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.848466 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:33 crc kubenswrapper[4886]: E1124 08:50:33.848561 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:33 crc kubenswrapper[4886]: E1124 08:50:33.848916 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.948694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.948741 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.948752 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.948771 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:33 crc kubenswrapper[4886]: I1124 08:50:33.948782 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:33Z","lastTransitionTime":"2025-11-24T08:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.050927 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.050962 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.050975 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.050995 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.051008 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.153406 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.153454 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.153465 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.153483 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.153496 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.256193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.256241 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.256251 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.256271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.256286 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.359238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.359289 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.359299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.359315 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.359326 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.462910 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.462966 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.462986 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.463014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.463036 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.566611 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.566694 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.566716 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.566760 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.567344 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.670328 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.670686 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.670810 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.670914 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.671012 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.773459 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.773497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.773505 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.773521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.773531 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.848255 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.848315 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:34 crc kubenswrapper[4886]: E1124 08:50:34.848411 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:34 crc kubenswrapper[4886]: E1124 08:50:34.848651 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.862151 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6a474a-7674-4280-8da4-784cc3c4fc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e99af72fba7cd02a022aede4ff904f6dc0d263429468b460f3886cbbf8767e9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a5e301ad7f424b4db6c7c2d4054973bf707f225de81d58a13077e1a130fe59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://879cdf4650f88cd23cfed38dabd912619f3c95ab3c254c0eb893eb4cbb2f8c5c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://243dc8c38e86fbd96d073b5dc2edaf3f378f052563abfedf86cd78e6270f11b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.874772 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://336d2d1c7e69b9e2c481681be12af7bfd17b8517e7fd97a67b3ca82dbc84ac35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c72af163fd4d5f2dc2a642984873ee83c62e9f1f96fac318fd17d5a74e5ea82f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.884178 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.884226 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.884240 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.884262 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.884276 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.888470 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9ckc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-fkfxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc 
kubenswrapper[4886]: I1124 08:50:34.903020 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.916600 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5g6ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82737edd-859f-4e06-8559-47375deb3a1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1c6253e8a3c02a22a64ad8913937a50b00f99c7c9da201aa0dd154175dc24c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ffhbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5g6ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.932415 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b578bbaf-7246-42d9-9d2d-346bd1da2c41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d401bfaa5ee27091fc815de2752da57dcce2799f3ff3f6c88faacbe02565324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4ab49b6707685341f4480554ba33f3dce9687661a3ed08ca228914a6821703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce7a090aac10049685ef8de3f6533a3a7c61f8fe5cc79f2af17f6f36d0e90168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d53782a307c740b991c352671c3365951fb623f61ade4b061452b63f19b5e606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce2a601938d1e214fdd14de097a4d95fdf3ac619b03bc2d269a9edf192fa940c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f2
7e424ef649941221dcd487c414e9eabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e219f09fdaf4fbce2d0c7f6d7dd746f27e424ef649941221dcd487c414e9eabd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bd697c5a7e9cd31c3ef75163507c789373f10f4ae458f55bf9bdfe318eda6e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-24T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6ng9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kl8k6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.953642 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03f9078c-6b20-46d5-ae2a-2eb20e236769\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:50:27Z\\\",\\\"message\\\":\\\"workPolicy event handler 4 for removal\\\\nI1124 08:50:27.556108 6936 handler.go:190] 
Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:50:27.556113 6936 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 08:50:27.556129 6936 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:50:27.556134 6936 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 08:50:27.556184 6936 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 08:50:27.556190 6936 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:50:27.556213 6936 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 08:50:27.556214 6936 factory.go:656] Stopping watch factory\\\\nI1124 08:50:27.556213 6936 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:50:27.556251 6936 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 08:50:27.556233 6936 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 08:50:27.556237 6936 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:50:27.556275 6936 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 08:50:27.556251 6936 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 08:50:27.556332 6936 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:50:27.556436 6936 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fba806955f4d7e98
c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m55nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-657wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.971788 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1de93d0-350b-4f10-a50d-d4ca45b47a33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cae19a77290a6aa02abc366057b242de03f58a183d136dc19248cdfa2c0fa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34e380e0dd3b6788878e07e1d497d003445742fb3e976e78f116fd775fb0ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa26820df6b9179aede13abc90067edc933b9bae6f11a6a1c32cf1a3ee494c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66381e4a8de0eb3ec683f0643c26082dac4f16a6e152164bf9a8b14449a23b0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52f6359fcbe915c66d972a4cdf89646528d2c1d5099c96cb3436e2da54d1099a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ee3470d2b357bc4a39b9fa7da8800930ea9f4d4a3926aa9d2000be8ca2da241\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T08:49:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bf2e186184e37d22515fc9a765b4ff0530354eb4b6af7679cf6f046651f9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1794ee7f2b8bf736d63385a3755d5a50e5f05cc74470ed04c3f21054476ff5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.984837 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b56e78f-e569-4f01-9a0f-6b9db3b505d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:50:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4cd419a96834f5093a6a3bad2d2b99f9ab6f4abdbe5d7fddadf1505a0fd1f3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0e663f6feec9c77de1cd993c443c9b3df3492612c554cc8b95c8e4e4a026b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://146e753c49089634c3d3105a3debf9737245745d8ed9aee61ae2cf4f56ae37ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f13b61e62e7d9dc91305545f1843a74d2b10a9df1f207cddb9947059876a6d36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6d0c617fe424a39e246d154cd6c0d8dc451c251e24dffaf45d644354e94e01e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:49:25Z\\\",\\\"message\\\":\\\" 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 08:49:25.277531 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 08:49:25.277556 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 08:49:25.277707 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1124 08:49:25.277722 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 08:49:25.278962 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1763974159\\\\\\\\\\\\\\\" (2025-11-24 08:49:18 +0000 UTC to 2025-12-24 08:49:19 +0000 UTC (now=2025-11-24 08:49:25.278915173 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279136 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763974165\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763974164\\\\\\\\\\\\\\\" (2025-11-24 07:49:24 +0000 UTC to 2026-11-24 07:49:24 +0000 UTC (now=2025-11-24 08:49:25.279105228 +0000 UTC))\\\\\\\"\\\\nI1124 08:49:25.279186 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 08:49:25.279230 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 08:49:25.279272 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-505038271/tls.crt::/tmp/serving-cert-505038271/tls.key\\\\\\\"\\\\nI1124 08:49:25.279422 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1124 08:49:25.273255 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e022f9b9279047d700f687148beaeef5ce4f72de418be1bdf0edde4e5211da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc5ed0e2c8f6299305d6c26194c4b43e6e884785effd5ae99254221d738e7662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.986529 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.986594 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.986608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.986632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.986644 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:34Z","lastTransitionTime":"2025-11-24T08:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:34 crc kubenswrapper[4886]: I1124 08:50:34.998279 4886 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15edcf2f-5ba7-4db0-b44e-fac293a82688\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:49:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44ecd99cee98ca44de5a0af10ab152bc27aca5ab8f94d9d58b295804261846d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4868ba55a55a9c6373e02adc8828
3d51196d5e19b554d0f351c59599731130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0c4a9d27ad84a91160eed506a2a6b0b34968fc71063ce3493326092049d2bf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:49:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98787b856c8680640854ec328aa9dc502d03f8617e5b19283b2bd03c3cca7988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:49:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:49:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:49:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:50:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.055715 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vcq8j" podStartSLOduration=69.055681697 podStartE2EDuration="1m9.055681697s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:35.034900503 +0000 UTC m=+90.921638658" watchObservedRunningTime="2025-11-24 08:50:35.055681697 +0000 UTC m=+90.942419832" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.070025 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podStartSLOduration=70.069997312 podStartE2EDuration="1m10.069997312s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:35.069485768 +0000 UTC m=+90.956223903" watchObservedRunningTime="2025-11-24 08:50:35.069997312 +0000 UTC m=+90.956735447" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.088783 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.088836 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.088849 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.088870 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.088886 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.102001 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2dk8j" podStartSLOduration=70.101981721 podStartE2EDuration="1m10.101981721s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:35.083966828 +0000 UTC m=+90.970704963" watchObservedRunningTime="2025-11-24 08:50:35.101981721 +0000 UTC m=+90.988719846" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.126674 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-82v6c" podStartSLOduration=70.126656428 podStartE2EDuration="1m10.126656428s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:35.126284947 +0000 UTC m=+91.013023092" watchObservedRunningTime="2025-11-24 08:50:35.126656428 +0000 UTC m=+91.013394563" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.192585 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.192641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.192653 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.192674 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.192690 4886 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.295536 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.295592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.295606 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.295627 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.295641 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.398095 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.398196 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.398211 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.398234 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.398249 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.501037 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.501118 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.501129 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.501181 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.501204 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.604110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.604199 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.604217 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.604242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.604256 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.707641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.707681 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.707689 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.707704 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.707713 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.810008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.810289 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.810389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.810464 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.810546 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.849075 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:35 crc kubenswrapper[4886]: E1124 08:50:35.849257 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.849468 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:35 crc kubenswrapper[4886]: E1124 08:50:35.849525 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.912242 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.912273 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.912285 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.912304 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:35 crc kubenswrapper[4886]: I1124 08:50:35.912314 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:35Z","lastTransitionTime":"2025-11-24T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.014941 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.015008 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.015021 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.015042 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.015055 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.117866 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.117916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.117928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.117949 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.117963 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.220537 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.220620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.220647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.220678 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.220706 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.323580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.323883 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.323960 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.324064 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.324171 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.426510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.426825 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.426902 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.426985 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.427072 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.529441 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.529674 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.529768 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.529842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.529911 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.632488 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.632551 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.632566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.632584 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.632593 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.735301 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.735568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.735641 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.735726 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.735859 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.839620 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.840027 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.840111 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.840210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.840283 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.848300 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.848319 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:36 crc kubenswrapper[4886]: E1124 08:50:36.848500 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:36 crc kubenswrapper[4886]: E1124 08:50:36.848596 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.943389 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.943422 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.943432 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.943450 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:36 crc kubenswrapper[4886]: I1124 08:50:36.943461 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:36Z","lastTransitionTime":"2025-11-24T08:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.046072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.046527 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.046646 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.046719 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.046780 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.149647 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.149687 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.149697 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.149712 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.149721 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.252216 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.252267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.252283 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.252304 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.252315 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.354435 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.354676 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.354751 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.354831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.354894 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.457413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.457463 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.457482 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.457502 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.457513 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.560497 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.560785 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.560858 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.560928 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.561010 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.664430 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.664503 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.664520 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.664548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.664570 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.767219 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.767260 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.767271 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.767290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.767303 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.848277 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.848314 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:37 crc kubenswrapper[4886]: E1124 08:50:37.848422 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:37 crc kubenswrapper[4886]: E1124 08:50:37.848485 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.860169 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.869859 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.869904 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.869917 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.869933 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.869944 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.972648 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.973193 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.973278 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.973367 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:37 crc kubenswrapper[4886]: I1124 08:50:37.973466 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:37Z","lastTransitionTime":"2025-11-24T08:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.076898 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.076943 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.076954 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.076974 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.076986 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.184109 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.184296 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.184316 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.184336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.184348 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.287398 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.287455 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.287466 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.287484 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.287496 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.389809 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.389852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.389861 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.389877 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.389888 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.492110 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.492145 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.492179 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.492221 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.492236 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.595404 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.595831 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.595938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.596041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.596229 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.699279 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.699323 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.699336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.699360 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.699374 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.802566 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.802607 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.802617 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.802632 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.802642 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.849361 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.849506 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:38 crc kubenswrapper[4886]: E1124 08:50:38.849598 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:38 crc kubenswrapper[4886]: E1124 08:50:38.849717 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.905842 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.905902 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.905920 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.905946 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:38 crc kubenswrapper[4886]: I1124 08:50:38.905971 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:38Z","lastTransitionTime":"2025-11-24T08:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.008095 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.008136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.008163 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.008183 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.008193 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.110558 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.110589 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.110598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.110611 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.110620 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.213018 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.213062 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.213071 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.213086 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.213100 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.316236 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.316281 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.316289 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.316306 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.316318 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.419489 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.419544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.419557 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.419576 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.419589 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.522554 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.522608 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.522619 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.522642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.522662 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.625565 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.625642 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.625655 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.625682 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.625698 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.728072 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.728138 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.728184 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.728207 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.728228 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.831299 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.831769 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.831853 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.831926 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.832007 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.848462 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.848999 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:39 crc kubenswrapper[4886]: E1124 08:50:39.849119 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:39 crc kubenswrapper[4886]: E1124 08:50:39.849394 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.935099 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.935204 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.935221 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.935243 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:39 crc kubenswrapper[4886]: I1124 08:50:39.935257 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:39Z","lastTransitionTime":"2025-11-24T08:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.038015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.038066 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.038079 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.038101 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.038116 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.141094 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.141136 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.141175 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.141195 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.141213 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.243938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.244000 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.244019 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.244041 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.244056 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.346949 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.347003 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.347014 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.347030 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.347044 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.450462 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.450547 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.450568 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.450598 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.450621 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.553347 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.553403 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.553413 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.553434 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.553449 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.656206 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.656238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.656248 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.656264 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.656273 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.759197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.759237 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.759267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.759284 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.759293 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.849072 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.849171 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:40 crc kubenswrapper[4886]: E1124 08:50:40.849259 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:40 crc kubenswrapper[4886]: E1124 08:50:40.849329 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.853472 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 08:50:40 crc kubenswrapper[4886]: E1124 08:50:40.853747 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.861498 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.861533 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.861541 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.861555 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.861564 4886 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.903110 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.903983 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 08:50:40 crc kubenswrapper[4886]: E1124 08:50:40.904187 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.963882 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.964290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.964429 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.964540 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:40 crc kubenswrapper[4886]: I1124 08:50:40.964627 4886 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:40Z","lastTransitionTime":"2025-11-24T08:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.067334 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.067401 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.067416 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.067438 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.067454 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.169510 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.169541 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.169548 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.169561 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.169570 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.272131 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.272197 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.272210 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.272230 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.272243 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.374244 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.374278 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.374290 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.374305 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.374316 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.476550 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.476580 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.476588 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.476602 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.476613 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.579004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.579044 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.579063 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.579088 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.579105 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.682069 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.682112 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.682123 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.682142 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.682170 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.784839 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.784885 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.784897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.784916 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.784928 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.849119 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.849119 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:41 crc kubenswrapper[4886]: E1124 08:50:41.849403 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:41 crc kubenswrapper[4886]: E1124 08:50:41.849489 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.887394 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.887457 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.887468 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.887486 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.887520 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.989480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.989518 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.989528 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.989544 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:41 crc kubenswrapper[4886]: I1124 08:50:41.989556 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:41Z","lastTransitionTime":"2025-11-24T08:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.091474 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.091501 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.091508 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.091521 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.091530 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.194458 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.194513 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.194524 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.194543 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.194553 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.296684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.297238 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.297336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.297409 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.297536 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.400707 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.400748 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.400758 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.400774 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.400782 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.503740 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.503785 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.503796 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.503817 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.503828 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.606730 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.606780 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.606794 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.606815 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.606830 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.708893 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.708938 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.708989 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.709015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.709029 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.811802 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.811873 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.811897 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.811949 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.811964 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.848416 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:42 crc kubenswrapper[4886]: E1124 08:50:42.848548 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.848720 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:42 crc kubenswrapper[4886]: E1124 08:50:42.848766 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.914592 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.914684 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.914709 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.914737 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:42 crc kubenswrapper[4886]: I1124 08:50:42.914757 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:42Z","lastTransitionTime":"2025-11-24T08:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.017416 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.017470 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.017480 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.017496 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.017506 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:43Z","lastTransitionTime":"2025-11-24T08:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.120267 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.120313 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.120322 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.120336 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.120346 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:43Z","lastTransitionTime":"2025-11-24T08:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.223852 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.224004 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.224015 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.224034 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.224046 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:43Z","lastTransitionTime":"2025-11-24T08:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.250725 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.250788 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.250803 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.250824 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.250838 4886 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:50:43Z","lastTransitionTime":"2025-11-24T08:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.325395 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9"] Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.326194 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.328666 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.330616 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.330854 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.331075 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.361353 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.361331089 podStartE2EDuration="1m18.361331089s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:43.347827947 +0000 UTC m=+99.234566072" watchObservedRunningTime="2025-11-24 08:50:43.361331089 +0000 UTC m=+99.248069224" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.415038 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kl8k6" podStartSLOduration=78.415014788 podStartE2EDuration="1m18.415014788s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:43.388699914 +0000 UTC m=+99.275438059" watchObservedRunningTime="2025-11-24 08:50:43.415014788 
+0000 UTC m=+99.301752923" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.436224 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=76.436203113 podStartE2EDuration="1m16.436203113s" podCreationTimestamp="2025-11-24 08:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:43.435119122 +0000 UTC m=+99.321857277" watchObservedRunningTime="2025-11-24 08:50:43.436203113 +0000 UTC m=+99.322941248" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.462769 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.462752234 podStartE2EDuration="1m17.462752234s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:43.451478677 +0000 UTC m=+99.338216832" watchObservedRunningTime="2025-11-24 08:50:43.462752234 +0000 UTC m=+99.349490369" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.463341 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.463335211 podStartE2EDuration="46.463335211s" podCreationTimestamp="2025-11-24 08:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:43.462664181 +0000 UTC m=+99.349402336" watchObservedRunningTime="2025-11-24 08:50:43.463335211 +0000 UTC m=+99.350073346" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.484457 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5g6ld" podStartSLOduration=78.484433223 podStartE2EDuration="1m18.484433223s" 
podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:43.483852987 +0000 UTC m=+99.370591122" watchObservedRunningTime="2025-11-24 08:50:43.484433223 +0000 UTC m=+99.371171368" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.492115 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3738ed59-228b-4c6e-93a8-973986fe32b0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.492191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3738ed59-228b-4c6e-93a8-973986fe32b0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.492215 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3738ed59-228b-4c6e-93a8-973986fe32b0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.492242 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3738ed59-228b-4c6e-93a8-973986fe32b0-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.492323 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3738ed59-228b-4c6e-93a8-973986fe32b0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.506786 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.506761002 podStartE2EDuration="6.506761002s" podCreationTimestamp="2025-11-24 08:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:43.506388211 +0000 UTC m=+99.393126346" watchObservedRunningTime="2025-11-24 08:50:43.506761002 +0000 UTC m=+99.393499137" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.593778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3738ed59-228b-4c6e-93a8-973986fe32b0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.594028 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3738ed59-228b-4c6e-93a8-973986fe32b0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.594122 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3738ed59-228b-4c6e-93a8-973986fe32b0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.594205 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3738ed59-228b-4c6e-93a8-973986fe32b0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.594286 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3738ed59-228b-4c6e-93a8-973986fe32b0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.594371 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3738ed59-228b-4c6e-93a8-973986fe32b0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.594443 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/3738ed59-228b-4c6e-93a8-973986fe32b0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.594751 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3738ed59-228b-4c6e-93a8-973986fe32b0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.604059 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3738ed59-228b-4c6e-93a8-973986fe32b0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.615322 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3738ed59-228b-4c6e-93a8-973986fe32b0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vbgd9\" (UID: \"3738ed59-228b-4c6e-93a8-973986fe32b0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.642958 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.797016 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:43 crc kubenswrapper[4886]: E1124 08:50:43.797145 4886 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:50:43 crc kubenswrapper[4886]: E1124 08:50:43.797239 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs podName:7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:47.797223156 +0000 UTC m=+163.683961291 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs") pod "network-metrics-daemon-fkfxv" (UID: "7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.848365 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.848444 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:43 crc kubenswrapper[4886]: E1124 08:50:43.848480 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:43 crc kubenswrapper[4886]: E1124 08:50:43.848518 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.874987 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" event={"ID":"3738ed59-228b-4c6e-93a8-973986fe32b0","Type":"ContainerStarted","Data":"5c160d056c657cb8215ff8385a10ef047a8ec8418a14e9094763291cac604895"} Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.875036 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" event={"ID":"3738ed59-228b-4c6e-93a8-973986fe32b0","Type":"ContainerStarted","Data":"aa29e3f1746ba68731d8b415e2ba6b9dc93641f557613b44846c917ac90913d1"} Nov 24 08:50:43 crc kubenswrapper[4886]: I1124 08:50:43.890877 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbgd9" podStartSLOduration=78.890858565 
podStartE2EDuration="1m18.890858565s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:50:43.890390371 +0000 UTC m=+99.777128506" watchObservedRunningTime="2025-11-24 08:50:43.890858565 +0000 UTC m=+99.777596700" Nov 24 08:50:44 crc kubenswrapper[4886]: I1124 08:50:44.848492 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:44 crc kubenswrapper[4886]: I1124 08:50:44.849638 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:44 crc kubenswrapper[4886]: E1124 08:50:44.849822 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:44 crc kubenswrapper[4886]: E1124 08:50:44.849933 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:45 crc kubenswrapper[4886]: I1124 08:50:45.848969 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:45 crc kubenswrapper[4886]: E1124 08:50:45.849106 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:45 crc kubenswrapper[4886]: I1124 08:50:45.849741 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:45 crc kubenswrapper[4886]: E1124 08:50:45.849963 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:46 crc kubenswrapper[4886]: I1124 08:50:46.848346 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:46 crc kubenswrapper[4886]: I1124 08:50:46.848436 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:46 crc kubenswrapper[4886]: E1124 08:50:46.848513 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:46 crc kubenswrapper[4886]: E1124 08:50:46.848593 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:47 crc kubenswrapper[4886]: I1124 08:50:47.848243 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:47 crc kubenswrapper[4886]: I1124 08:50:47.848244 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:47 crc kubenswrapper[4886]: E1124 08:50:47.848347 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:47 crc kubenswrapper[4886]: E1124 08:50:47.848654 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:48 crc kubenswrapper[4886]: I1124 08:50:48.848679 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:48 crc kubenswrapper[4886]: I1124 08:50:48.848687 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:48 crc kubenswrapper[4886]: E1124 08:50:48.848852 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:48 crc kubenswrapper[4886]: E1124 08:50:48.848919 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:49 crc kubenswrapper[4886]: I1124 08:50:49.848562 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:49 crc kubenswrapper[4886]: I1124 08:50:49.848650 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:49 crc kubenswrapper[4886]: E1124 08:50:49.848729 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:49 crc kubenswrapper[4886]: E1124 08:50:49.848906 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:50 crc kubenswrapper[4886]: I1124 08:50:50.849029 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:50 crc kubenswrapper[4886]: I1124 08:50:50.849138 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:50 crc kubenswrapper[4886]: E1124 08:50:50.849211 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:50 crc kubenswrapper[4886]: E1124 08:50:50.849387 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:51 crc kubenswrapper[4886]: I1124 08:50:51.849128 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:51 crc kubenswrapper[4886]: I1124 08:50:51.849190 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:51 crc kubenswrapper[4886]: E1124 08:50:51.849404 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:51 crc kubenswrapper[4886]: E1124 08:50:51.849488 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:52 crc kubenswrapper[4886]: I1124 08:50:52.849315 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:52 crc kubenswrapper[4886]: I1124 08:50:52.849335 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:52 crc kubenswrapper[4886]: E1124 08:50:52.849527 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:52 crc kubenswrapper[4886]: E1124 08:50:52.849686 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:52 crc kubenswrapper[4886]: I1124 08:50:52.850464 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 08:50:52 crc kubenswrapper[4886]: E1124 08:50:52.850758 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:50:53 crc kubenswrapper[4886]: I1124 08:50:53.848633 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:53 crc kubenswrapper[4886]: I1124 08:50:53.848754 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:53 crc kubenswrapper[4886]: E1124 08:50:53.848909 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:53 crc kubenswrapper[4886]: E1124 08:50:53.849336 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:54 crc kubenswrapper[4886]: I1124 08:50:54.849338 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:54 crc kubenswrapper[4886]: I1124 08:50:54.849340 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:54 crc kubenswrapper[4886]: E1124 08:50:54.850722 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:54 crc kubenswrapper[4886]: E1124 08:50:54.850829 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:55 crc kubenswrapper[4886]: I1124 08:50:55.848931 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:55 crc kubenswrapper[4886]: E1124 08:50:55.849133 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:55 crc kubenswrapper[4886]: I1124 08:50:55.849632 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:55 crc kubenswrapper[4886]: E1124 08:50:55.849853 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:56 crc kubenswrapper[4886]: I1124 08:50:56.848596 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:56 crc kubenswrapper[4886]: I1124 08:50:56.848594 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:56 crc kubenswrapper[4886]: E1124 08:50:56.848985 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:56 crc kubenswrapper[4886]: E1124 08:50:56.849088 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:57 crc kubenswrapper[4886]: I1124 08:50:57.848867 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:57 crc kubenswrapper[4886]: I1124 08:50:57.848942 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:57 crc kubenswrapper[4886]: E1124 08:50:57.849040 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:50:57 crc kubenswrapper[4886]: E1124 08:50:57.849099 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:58 crc kubenswrapper[4886]: I1124 08:50:58.849085 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:50:58 crc kubenswrapper[4886]: I1124 08:50:58.849199 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:50:58 crc kubenswrapper[4886]: E1124 08:50:58.849355 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:50:58 crc kubenswrapper[4886]: E1124 08:50:58.849497 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:50:59 crc kubenswrapper[4886]: I1124 08:50:59.848614 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:50:59 crc kubenswrapper[4886]: I1124 08:50:59.848730 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:50:59 crc kubenswrapper[4886]: E1124 08:50:59.849118 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:50:59 crc kubenswrapper[4886]: E1124 08:50:59.849319 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:00 crc kubenswrapper[4886]: I1124 08:51:00.848862 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:00 crc kubenswrapper[4886]: E1124 08:51:00.849040 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:00 crc kubenswrapper[4886]: I1124 08:51:00.849324 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:00 crc kubenswrapper[4886]: E1124 08:51:00.849378 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:00 crc kubenswrapper[4886]: I1124 08:51:00.939636 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/1.log" Nov 24 08:51:00 crc kubenswrapper[4886]: I1124 08:51:00.940807 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/0.log" Nov 24 08:51:00 crc kubenswrapper[4886]: I1124 08:51:00.940892 4886 generic.go:334] "Generic (PLEG): container finished" podID="5d515fec-60f3-4bf7-9ba4-697bb691b670" containerID="d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60" exitCode=1 Nov 24 08:51:00 crc kubenswrapper[4886]: I1124 08:51:00.940946 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dk8j" event={"ID":"5d515fec-60f3-4bf7-9ba4-697bb691b670","Type":"ContainerDied","Data":"d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60"} Nov 24 08:51:00 crc kubenswrapper[4886]: I1124 08:51:00.941000 4886 scope.go:117] "RemoveContainer" containerID="ead2821f6b71571212807c6f2984d541faf5143c28fee48594b19e08bd91d085" Nov 24 08:51:00 crc kubenswrapper[4886]: I1124 08:51:00.941651 4886 scope.go:117] "RemoveContainer" containerID="d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60" Nov 24 08:51:00 crc kubenswrapper[4886]: E1124 08:51:00.941845 4886 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2dk8j_openshift-multus(5d515fec-60f3-4bf7-9ba4-697bb691b670)\"" pod="openshift-multus/multus-2dk8j" podUID="5d515fec-60f3-4bf7-9ba4-697bb691b670" Nov 24 08:51:01 crc kubenswrapper[4886]: I1124 08:51:01.848793 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:01 crc kubenswrapper[4886]: I1124 08:51:01.848832 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:01 crc kubenswrapper[4886]: E1124 08:51:01.848974 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:01 crc kubenswrapper[4886]: E1124 08:51:01.849084 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:01 crc kubenswrapper[4886]: I1124 08:51:01.945391 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/1.log" Nov 24 08:51:02 crc kubenswrapper[4886]: I1124 08:51:02.848950 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:02 crc kubenswrapper[4886]: I1124 08:51:02.848961 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:02 crc kubenswrapper[4886]: E1124 08:51:02.849174 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:02 crc kubenswrapper[4886]: E1124 08:51:02.849198 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:03 crc kubenswrapper[4886]: I1124 08:51:03.848594 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:03 crc kubenswrapper[4886]: I1124 08:51:03.848623 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:03 crc kubenswrapper[4886]: E1124 08:51:03.848815 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:03 crc kubenswrapper[4886]: E1124 08:51:03.848885 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:04 crc kubenswrapper[4886]: E1124 08:51:04.797547 4886 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 08:51:04 crc kubenswrapper[4886]: I1124 08:51:04.848204 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:04 crc kubenswrapper[4886]: I1124 08:51:04.848248 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:04 crc kubenswrapper[4886]: E1124 08:51:04.849542 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:04 crc kubenswrapper[4886]: E1124 08:51:04.850189 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:04 crc kubenswrapper[4886]: E1124 08:51:04.941327 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 08:51:05 crc kubenswrapper[4886]: I1124 08:51:05.849260 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:05 crc kubenswrapper[4886]: E1124 08:51:05.849430 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:05 crc kubenswrapper[4886]: I1124 08:51:05.849282 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:05 crc kubenswrapper[4886]: E1124 08:51:05.849677 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:06 crc kubenswrapper[4886]: I1124 08:51:06.848378 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:06 crc kubenswrapper[4886]: I1124 08:51:06.848544 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:06 crc kubenswrapper[4886]: E1124 08:51:06.848639 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:06 crc kubenswrapper[4886]: E1124 08:51:06.848840 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:06 crc kubenswrapper[4886]: I1124 08:51:06.849673 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 08:51:06 crc kubenswrapper[4886]: E1124 08:51:06.849930 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-657wc_openshift-ovn-kubernetes(03f9078c-6b20-46d5-ae2a-2eb20e236769)\"" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" Nov 24 08:51:07 crc kubenswrapper[4886]: I1124 08:51:07.848880 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:07 crc kubenswrapper[4886]: I1124 08:51:07.848948 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:07 crc kubenswrapper[4886]: E1124 08:51:07.849050 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:07 crc kubenswrapper[4886]: E1124 08:51:07.849193 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:08 crc kubenswrapper[4886]: I1124 08:51:08.848533 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:08 crc kubenswrapper[4886]: I1124 08:51:08.848775 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:08 crc kubenswrapper[4886]: E1124 08:51:08.849330 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:08 crc kubenswrapper[4886]: E1124 08:51:08.849341 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:09 crc kubenswrapper[4886]: I1124 08:51:09.848717 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:09 crc kubenswrapper[4886]: I1124 08:51:09.848763 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:09 crc kubenswrapper[4886]: E1124 08:51:09.849108 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:09 crc kubenswrapper[4886]: E1124 08:51:09.849219 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:09 crc kubenswrapper[4886]: E1124 08:51:09.942636 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 08:51:10 crc kubenswrapper[4886]: I1124 08:51:10.848944 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:10 crc kubenswrapper[4886]: E1124 08:51:10.849291 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:10 crc kubenswrapper[4886]: I1124 08:51:10.849611 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:10 crc kubenswrapper[4886]: E1124 08:51:10.849790 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:11 crc kubenswrapper[4886]: I1124 08:51:11.848214 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:11 crc kubenswrapper[4886]: I1124 08:51:11.848287 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:11 crc kubenswrapper[4886]: E1124 08:51:11.848415 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:11 crc kubenswrapper[4886]: E1124 08:51:11.848520 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:12 crc kubenswrapper[4886]: I1124 08:51:12.849254 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:12 crc kubenswrapper[4886]: I1124 08:51:12.849292 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:12 crc kubenswrapper[4886]: E1124 08:51:12.849451 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:12 crc kubenswrapper[4886]: E1124 08:51:12.849571 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:13 crc kubenswrapper[4886]: I1124 08:51:13.848918 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:13 crc kubenswrapper[4886]: I1124 08:51:13.849022 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:13 crc kubenswrapper[4886]: E1124 08:51:13.849379 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:13 crc kubenswrapper[4886]: E1124 08:51:13.849486 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:13 crc kubenswrapper[4886]: I1124 08:51:13.849507 4886 scope.go:117] "RemoveContainer" containerID="d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60" Nov 24 08:51:13 crc kubenswrapper[4886]: I1124 08:51:13.989249 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/1.log" Nov 24 08:51:14 crc kubenswrapper[4886]: I1124 08:51:14.848942 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:14 crc kubenswrapper[4886]: E1124 08:51:14.849955 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:14 crc kubenswrapper[4886]: I1124 08:51:14.850030 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:14 crc kubenswrapper[4886]: E1124 08:51:14.850091 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:14 crc kubenswrapper[4886]: E1124 08:51:14.943355 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 08:51:14 crc kubenswrapper[4886]: I1124 08:51:14.995429 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/1.log" Nov 24 08:51:14 crc kubenswrapper[4886]: I1124 08:51:14.995508 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dk8j" event={"ID":"5d515fec-60f3-4bf7-9ba4-697bb691b670","Type":"ContainerStarted","Data":"97763f8ce77f782f6462d9de656426c9d79e3b8ffc5a0ddcfbe4c68da2ec9905"} Nov 24 08:51:15 crc kubenswrapper[4886]: I1124 08:51:15.848749 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:15 crc kubenswrapper[4886]: I1124 08:51:15.848846 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:15 crc kubenswrapper[4886]: E1124 08:51:15.849417 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:15 crc kubenswrapper[4886]: E1124 08:51:15.849685 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:16 crc kubenswrapper[4886]: I1124 08:51:16.848273 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:16 crc kubenswrapper[4886]: I1124 08:51:16.848391 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:16 crc kubenswrapper[4886]: E1124 08:51:16.848457 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:16 crc kubenswrapper[4886]: E1124 08:51:16.848583 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:17 crc kubenswrapper[4886]: I1124 08:51:17.848989 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:17 crc kubenswrapper[4886]: E1124 08:51:17.849190 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:17 crc kubenswrapper[4886]: I1124 08:51:17.849236 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:17 crc kubenswrapper[4886]: E1124 08:51:17.849480 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:17 crc kubenswrapper[4886]: I1124 08:51:17.850211 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 08:51:18 crc kubenswrapper[4886]: I1124 08:51:18.013035 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/3.log" Nov 24 08:51:18 crc kubenswrapper[4886]: I1124 08:51:18.015932 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerStarted","Data":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} Nov 24 08:51:18 crc kubenswrapper[4886]: I1124 08:51:18.016442 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:51:18 crc kubenswrapper[4886]: I1124 08:51:18.045022 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podStartSLOduration=113.044996029 podStartE2EDuration="1m53.044996029s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:18.044906217 +0000 UTC m=+133.931644362" watchObservedRunningTime="2025-11-24 08:51:18.044996029 +0000 UTC m=+133.931734164" Nov 24 08:51:18 crc kubenswrapper[4886]: I1124 08:51:18.573773 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fkfxv"] Nov 24 08:51:18 crc kubenswrapper[4886]: I1124 08:51:18.573913 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:18 crc kubenswrapper[4886]: E1124 08:51:18.574024 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:18 crc kubenswrapper[4886]: I1124 08:51:18.848392 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:18 crc kubenswrapper[4886]: E1124 08:51:18.848902 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:19 crc kubenswrapper[4886]: I1124 08:51:19.849035 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:19 crc kubenswrapper[4886]: I1124 08:51:19.849037 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:19 crc kubenswrapper[4886]: I1124 08:51:19.849387 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:19 crc kubenswrapper[4886]: E1124 08:51:19.849701 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:19 crc kubenswrapper[4886]: E1124 08:51:19.849724 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:19 crc kubenswrapper[4886]: E1124 08:51:19.850250 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:19 crc kubenswrapper[4886]: E1124 08:51:19.944407 4886 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 08:51:20 crc kubenswrapper[4886]: I1124 08:51:20.849212 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:20 crc kubenswrapper[4886]: E1124 08:51:20.849387 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:21 crc kubenswrapper[4886]: I1124 08:51:21.848484 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:21 crc kubenswrapper[4886]: I1124 08:51:21.848555 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:21 crc kubenswrapper[4886]: I1124 08:51:21.848510 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:21 crc kubenswrapper[4886]: E1124 08:51:21.848691 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:21 crc kubenswrapper[4886]: E1124 08:51:21.848853 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:21 crc kubenswrapper[4886]: E1124 08:51:21.848966 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:22 crc kubenswrapper[4886]: I1124 08:51:22.848962 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:22 crc kubenswrapper[4886]: E1124 08:51:22.849117 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:23 crc kubenswrapper[4886]: I1124 08:51:23.848830 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:23 crc kubenswrapper[4886]: I1124 08:51:23.848849 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:23 crc kubenswrapper[4886]: I1124 08:51:23.849001 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:23 crc kubenswrapper[4886]: E1124 08:51:23.849062 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:51:23 crc kubenswrapper[4886]: E1124 08:51:23.849019 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:51:23 crc kubenswrapper[4886]: E1124 08:51:23.849275 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-fkfxv" podUID="7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7" Nov 24 08:51:24 crc kubenswrapper[4886]: I1124 08:51:24.849014 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:24 crc kubenswrapper[4886]: E1124 08:51:24.851711 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:51:25 crc kubenswrapper[4886]: I1124 08:51:25.848887 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:25 crc kubenswrapper[4886]: I1124 08:51:25.849004 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:25 crc kubenswrapper[4886]: I1124 08:51:25.848929 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:25 crc kubenswrapper[4886]: I1124 08:51:25.851757 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 08:51:25 crc kubenswrapper[4886]: I1124 08:51:25.852562 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 08:51:25 crc kubenswrapper[4886]: I1124 08:51:25.852661 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 08:51:25 crc kubenswrapper[4886]: I1124 08:51:25.852679 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 08:51:26 crc kubenswrapper[4886]: I1124 08:51:26.848208 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:26 crc kubenswrapper[4886]: I1124 08:51:26.851211 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 08:51:26 crc kubenswrapper[4886]: I1124 08:51:26.853632 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 08:51:31 crc kubenswrapper[4886]: I1124 08:51:31.784500 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:51:31 crc kubenswrapper[4886]: I1124 08:51:31.785105 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.752264 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:32 crc kubenswrapper[4886]: E1124 08:51:32.752522 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:53:34.752486344 +0000 UTC m=+270.639224479 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.752590 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.752628 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.753915 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.761896 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.853133 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.853223 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.857275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.857323 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:32 crc kubenswrapper[4886]: I1124 08:51:32.861870 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:51:33 crc kubenswrapper[4886]: I1124 08:51:33.064653 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:33 crc kubenswrapper[4886]: I1124 08:51:33.073553 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:51:33 crc kubenswrapper[4886]: W1124 08:51:33.246677 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b428acb57c46ba023910157917a7e88ed30ec891a1d1f25646f8a17bf05eaf20 WatchSource:0}: Error finding container b428acb57c46ba023910157917a7e88ed30ec891a1d1f25646f8a17bf05eaf20: Status 404 returned error can't find the container with id b428acb57c46ba023910157917a7e88ed30ec891a1d1f25646f8a17bf05eaf20 Nov 24 08:51:33 crc kubenswrapper[4886]: W1124 08:51:33.267681 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-2580deb40bb39ac7ed0b9e7dfa709c264344a634153b55adf162fac4d8a1fff0 WatchSource:0}: Error finding container 2580deb40bb39ac7ed0b9e7dfa709c264344a634153b55adf162fac4d8a1fff0: Status 404 returned error can't find the container with id 2580deb40bb39ac7ed0b9e7dfa709c264344a634153b55adf162fac4d8a1fff0 Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.080364 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2f57b3f163cb0372d00e0cdebd32b802ea2bdb7da73c60eaa8568ba512f686e4"} Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.080426 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2580deb40bb39ac7ed0b9e7dfa709c264344a634153b55adf162fac4d8a1fff0"} Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.082062 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4ec9ba1d545ff463000c5c9d96bc26bd21215fe09eb2a6566d228668a4e89a07"} Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.082094 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b428acb57c46ba023910157917a7e88ed30ec891a1d1f25646f8a17bf05eaf20"} Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.082478 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.085204 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"31085b24dca6ed78b9183fab86e3cc3f5559cbaa6504045c08658f5a73941c59"} Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.085240 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"58fd6b6951d7b364ebfbec4ea43cebef2c2e769ca45cb50606ab77ce2b69b961"} Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.143146 4886 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.185230 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9sdq"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.185925 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.185968 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lz4ml"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.187102 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.189307 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh"] Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.189379 4886 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.189429 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.189856 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.190608 4886 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.190650 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.190731 4886 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.190754 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.190806 4886 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource 
"configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.190824 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.190968 4886 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.190999 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.191656 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.192027 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9"] Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.192048 4886 reflector.go:561] 
object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.192227 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.192292 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.192088 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.192221 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.192099 4886 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.192517 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" 
cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.193051 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.193343 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.193915 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.193999 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fqpl9"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.194380 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.196211 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-845fz"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.196683 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.199400 4886 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.199463 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.200547 4886 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.200576 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.200647 4886 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed 
to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.200660 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.200702 4886 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.200714 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.200918 4886 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.200998 4886 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.201081 4886 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.201118 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.201273 4886 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.201303 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot 
list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.201384 4886 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.201410 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.201474 4886 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.201497 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found 
between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.201563 4886 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: secrets "openshift-config-operator-dockercfg-7pc5z" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.201587 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-config-operator-dockercfg-7pc5z\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.201656 4886 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.201678 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc 
kubenswrapper[4886]: W1124 08:51:34.201743 4886 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.201764 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.202480 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.202691 4886 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.202723 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.202753 4886 
reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.202771 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.202831 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh"] Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.203210 4886 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.203241 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 
08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.203296 4886 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.203313 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.203356 4886 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.203370 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.203657 4886 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.203989 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr"] Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.204008 4886 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.204027 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.204078 4886 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.204091 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 
08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.204219 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.204465 4886 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.204486 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.204593 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" Nov 24 08:51:34 crc kubenswrapper[4886]: W1124 08:51:34.204835 4886 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: secrets "config-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Nov 24 08:51:34 crc kubenswrapper[4886]: E1124 08:51:34.204881 4886 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"config-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.205099 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.205819 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tbxjm"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.206001 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.207094 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-w6cvz"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.207448 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-w6cvz" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.207683 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210127 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210223 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210289 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210319 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210530 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210601 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210610 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210528 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.210537 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 
08:51:34.213580 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zqjsm"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.214319 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.214768 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-njfqk"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.215211 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.215505 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.216073 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.227216 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n794d"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.228459 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.231605 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.233704 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.234978 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.235129 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.248929 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.249109 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.249763 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.249787 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.249924 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.249979 4886 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250114 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250137 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250293 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250374 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250399 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250137 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250490 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250531 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250624 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250657 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250666 4886 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250753 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250764 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250875 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250936 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250887 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.251055 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.251054 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.250557 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.251063 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.251226 4886 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.251606 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.251667 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.251863 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.251898 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.252042 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.253298 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.253443 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.253483 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.253680 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.253690 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 
08:51:34.253829 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ztbpv"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.253927 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.254958 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.258818 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtbkx"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.259415 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.259779 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xbr84"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.260219 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.260654 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.260767 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.261570 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.262618 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.267834 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.268593 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.268944 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.269537 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.270090 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.271452 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.272366 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.273284 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.273488 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.273735 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274398 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-serving-cert\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274432 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274457 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-client\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274478 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274500 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fksxg\" (UniqueName: \"kubernetes.io/projected/123bf335-5130-413e-b3fa-8fa4ba9111da-kube-api-access-fksxg\") pod \"dns-operator-744455d44c-njfqk\" (UID: \"123bf335-5130-413e-b3fa-8fa4ba9111da\") " pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274519 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/316d4bac-9349-4ec0-82ce-af715e7a3259-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274537 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274556 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbw7n\" (UniqueName: \"kubernetes.io/projected/580c4fdc-bdb3-4099-b715-ac4c63acecb2-kube-api-access-pbw7n\") pod \"downloads-7954f5f757-w6cvz\" (UID: \"580c4fdc-bdb3-4099-b715-ac4c63acecb2\") " pod="openshift-console/downloads-7954f5f757-w6cvz" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274576 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv2hw\" (UniqueName: \"kubernetes.io/projected/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-kube-api-access-vv2hw\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274599 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274619 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hnlj\" (UniqueName: 
\"kubernetes.io/projected/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-kube-api-access-9hnlj\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274644 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-config\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274668 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6cm\" (UniqueName: \"kubernetes.io/projected/316d4bac-9349-4ec0-82ce-af715e7a3259-kube-api-access-dn6cm\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274692 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8080498-de06-4f8f-9c35-3d296e28a021-serving-cert\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274715 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274742 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/123bf335-5130-413e-b3fa-8fa4ba9111da-metrics-tls\") pod \"dns-operator-744455d44c-njfqk\" (UID: \"123bf335-5130-413e-b3fa-8fa4ba9111da\") " pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274770 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274792 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8080498-de06-4f8f-9c35-3d296e28a021-trusted-ca\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274834 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-client\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274859 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xshwm\" (UniqueName: \"kubernetes.io/projected/f8080498-de06-4f8f-9c35-3d296e28a021-kube-api-access-xshwm\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274888 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v68xl\" (UniqueName: \"kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274914 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-images\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274935 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/316d4bac-9349-4ec0-82ce-af715e7a3259-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274961 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9gp8\" (UniqueName: \"kubernetes.io/projected/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-kube-api-access-f9gp8\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.274987 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efe0b276-8633-4435-b8af-d4651276c24f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.275008 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-serving-cert\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.275034 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-node-pullsecrets\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.275059 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.275081 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.275106 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.275130 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.275769 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.276392 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.276583 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.276861 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.276979 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.277093 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.277210 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.277354 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.277767 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.278552 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.278958 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.279998 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281197 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8080498-de06-4f8f-9c35-3d296e28a021-config\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281255 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit-dir\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281290 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281323 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l97qz\" (UniqueName: \"kubernetes.io/projected/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-kube-api-access-l97qz\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" Nov 24 08:51:34 crc 
kubenswrapper[4886]: I1124 08:51:34.281354 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/316d4bac-9349-4ec0-82ce-af715e7a3259-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281385 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-service-ca\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281415 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4dt\" (UniqueName: \"kubernetes.io/projected/efe0b276-8633-4435-b8af-d4651276c24f-kube-api-access-qp4dt\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281452 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe0b276-8633-4435-b8af-d4651276c24f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281481 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.281507 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-ca\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.284017 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4kfg8"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.284127 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.284247 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.284271 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.285708 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9sdq"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.285747 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.286345 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.300871 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.306565 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsmf5"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.307781 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.308455 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.308888 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.309191 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.309691 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.311978 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.312166 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.312218 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.312297 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.345711 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.345933 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.347413 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.352444 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-59wgb"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.352918 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.352978 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.355360 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.355936 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.355976 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t4g4w"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.356535 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.356634 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t4g4w" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.357537 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.359190 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hplr8"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.359689 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.360114 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.360457 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.360878 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.364352 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lz4ml"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.366933 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w6cvz"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.367930 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.368238 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.380367 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-845fz"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.380437 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj"] Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383017 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-node-pullsecrets\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383095 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383123 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383181 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383222 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8080498-de06-4f8f-9c35-3d296e28a021-config\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383247 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit-dir\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383270 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383298 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l97qz\" (UniqueName: \"kubernetes.io/projected/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-kube-api-access-l97qz\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383318 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-node-pullsecrets\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383324 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/316d4bac-9349-4ec0-82ce-af715e7a3259-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383393 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-service-ca\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383419 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp4dt\" (UniqueName: \"kubernetes.io/projected/efe0b276-8633-4435-b8af-d4651276c24f-kube-api-access-qp4dt\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383444 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe0b276-8633-4435-b8af-d4651276c24f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383465 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383487 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-ca\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383525 
4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-serving-cert\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383555 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383573 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-client\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383596 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383616 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fksxg\" (UniqueName: \"kubernetes.io/projected/123bf335-5130-413e-b3fa-8fa4ba9111da-kube-api-access-fksxg\") pod \"dns-operator-744455d44c-njfqk\" (UID: \"123bf335-5130-413e-b3fa-8fa4ba9111da\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/316d4bac-9349-4ec0-82ce-af715e7a3259-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383658 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv2hw\" (UniqueName: \"kubernetes.io/projected/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-kube-api-access-vv2hw\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383676 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383695 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbw7n\" (UniqueName: \"kubernetes.io/projected/580c4fdc-bdb3-4099-b715-ac4c63acecb2-kube-api-access-pbw7n\") pod \"downloads-7954f5f757-w6cvz\" (UID: \"580c4fdc-bdb3-4099-b715-ac4c63acecb2\") " pod="openshift-console/downloads-7954f5f757-w6cvz" Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383715 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383736 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hnlj\" (UniqueName: \"kubernetes.io/projected/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-kube-api-access-9hnlj\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383763 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8080498-de06-4f8f-9c35-3d296e28a021-serving-cert\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383785 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383807 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/123bf335-5130-413e-b3fa-8fa4ba9111da-metrics-tls\") pod \"dns-operator-744455d44c-njfqk\" (UID: \"123bf335-5130-413e-b3fa-8fa4ba9111da\") " pod="openshift-dns-operator/dns-operator-744455d44c-njfqk"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383828 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-config\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383849 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6cm\" (UniqueName: \"kubernetes.io/projected/316d4bac-9349-4ec0-82ce-af715e7a3259-kube-api-access-dn6cm\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383880 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383900 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383920 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8080498-de06-4f8f-9c35-3d296e28a021-trusted-ca\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383937 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-client\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383955 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xshwm\" (UniqueName: \"kubernetes.io/projected/f8080498-de06-4f8f-9c35-3d296e28a021-kube-api-access-xshwm\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.383990 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v68xl\" (UniqueName: \"kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.384006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-images\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.384026 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/316d4bac-9349-4ec0-82ce-af715e7a3259-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.384054 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9gp8\" (UniqueName: \"kubernetes.io/projected/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-kube-api-access-f9gp8\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.384081 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efe0b276-8633-4435-b8af-d4651276c24f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.384099 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-serving-cert\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.384831 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/316d4bac-9349-4ec0-82ce-af715e7a3259-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.385131 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-service-ca\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.385744 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8080498-de06-4f8f-9c35-3d296e28a021-config\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.385883 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit-dir\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.385758 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-ca\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.386791 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.388517 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8080498-de06-4f8f-9c35-3d296e28a021-trusted-ca\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.388763 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.388911 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-config\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.389502 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-images\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.390083 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.392520 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.393325 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-etcd-client\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.394033 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.395845 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/316d4bac-9349-4ec0-82ce-af715e7a3259-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.397514 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zqjsm"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.398712 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fqpl9"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.398721 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.399369 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.399369 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/123bf335-5130-413e-b3fa-8fa4ba9111da-metrics-tls\") pod \"dns-operator-744455d44c-njfqk\" (UID: \"123bf335-5130-413e-b3fa-8fa4ba9111da\") " pod="openshift-dns-operator/dns-operator-744455d44c-njfqk"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.399736 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-serving-cert\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.400447 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8080498-de06-4f8f-9c35-3d296e28a021-serving-cert\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.400499 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.401379 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ztbpv"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.403414 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n794d"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.403707 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-njfqk"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.407614 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.408453 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tbxjm"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.408511 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.415505 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.422621 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t4g4w"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.423878 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.431205 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtbkx"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.431888 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.433643 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.434649 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.437639 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsmf5"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.440126 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.442124 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.444392 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.445678 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.447255 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4zfpn"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.447588 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.448191 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4zfpn"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.464775 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.469342 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rj5kj"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.470716 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rj5kj"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.473304 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.475523 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.479090 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4kfg8"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.481882 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.482518 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.484962 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hplr8"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.487737 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rj5kj"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.490368 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.492550 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.494440 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-59wgb"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.497052 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x8hmr"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.497912 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x8hmr"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.499817 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x8hmr"]
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.503381 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.523275 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.543081 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.563028 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.583619 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.602919 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.622997 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.644007 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.673553 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.683635 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.702947 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.723063 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.743006 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.777591 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.784057 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.803496 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.823138 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.843360 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.863677 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.882816 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.904455 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.923253 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.943336 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.965383 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 24 08:51:34 crc kubenswrapper[4886]: I1124 08:51:34.983331 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.003888 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.023187 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.043467 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.062745 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.088880 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.123449 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.143643 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.163377 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194266 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32727d31-2207-4688-b70c-6045b674538b-serving-cert\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194319 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf28v\" (UniqueName: \"kubernetes.io/projected/ce7242d2-301f-4d8f-816a-a36418be67ca-kube-api-access-gf28v\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194341 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb155f7a-3c80-42c5-adfe-69f854a2d032-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194362 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-bound-sa-token\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194378 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tq6l\" (UniqueName: \"kubernetes.io/projected/9d51f527-2205-4113-9b65-655f3fab2e1c-kube-api-access-2tq6l\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194398 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d51f527-2205-4113-9b65-655f3fab2e1c-auth-proxy-config\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194422 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-registry-tls\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194441 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-config\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194472 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-serving-cert\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194494 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-oauth-serving-cert\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-service-ca\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194546 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdn4\" (UniqueName: \"kubernetes.io/projected/87f902e1-073b-4ccd-8b3a-717f802e9671-kube-api-access-cjdn4\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194581 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74mns\" (UniqueName: \"kubernetes.io/projected/32727d31-2207-4688-b70c-6045b674538b-kube-api-access-74mns\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194598 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb155f7a-3c80-42c5-adfe-69f854a2d032-config\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194630 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30599c42-eef7-4967-b84f-95b49a225bd6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194652 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce7242d2-301f-4d8f-816a-a36418be67ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194684 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7242d2-301f-4d8f-816a-a36418be67ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-oauth-config\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194737 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d51f527-2205-4113-9b65-655f3fab2e1c-machine-approver-tls\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4hmv\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-kube-api-access-d4hmv\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194784 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-trusted-ca-bundle\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194800 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-client-ca\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194816 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d51f527-2205-4113-9b65-655f3fab2e1c-config\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194832 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30599c42-eef7-4967-b84f-95b49a225bd6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194862 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-console-config\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm"
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194907 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d879a534-2c8a-463b-bbf6-75213cb4d554-samples-operator-tls\") pod
\"cluster-samples-operator-665b6dd947-bbnxh\" (UID: \"d879a534-2c8a-463b-bbf6-75213cb4d554\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194930 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-trusted-ca\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194947 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5wx5\" (UniqueName: \"kubernetes.io/projected/d879a534-2c8a-463b-bbf6-75213cb4d554-kube-api-access-q5wx5\") pod \"cluster-samples-operator-665b6dd947-bbnxh\" (UID: \"d879a534-2c8a-463b-bbf6-75213cb4d554\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194966 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.194990 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc 
kubenswrapper[4886]: I1124 08:51:35.195008 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-registry-certificates\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.195025 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb155f7a-3c80-42c5-adfe-69f854a2d032-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.195714 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.695694711 +0000 UTC m=+151.582432846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.203442 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.223029 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.242933 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.263755 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.282851 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296129 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296473 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/421edb7e-dea2-4578-894e-32e9eb8aff3b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pxjkj\" (UID: \"421edb7e-dea2-4578-894e-32e9eb8aff3b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.296516 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.796494763 +0000 UTC m=+151.683232888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296551 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d879a534-2c8a-463b-bbf6-75213cb4d554-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bbnxh\" (UID: \"d879a534-2c8a-463b-bbf6-75213cb4d554\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296585 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-audit-policies\") 
pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bbe26e55-76a5-4f66-b3c6-5f8933372332-images\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296654 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-trusted-ca\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296713 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296753 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-registry-certificates\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296777 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296801 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296827 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwgf\" (UniqueName: \"kubernetes.io/projected/3a73ff5c-5292-45f7-a7cf-97714a8a109d-kube-api-access-bgwgf\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296865 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296887 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwmd\" (UniqueName: \"kubernetes.io/projected/fd02555c-e6fc-4825-a06a-53497a2cfeda-kube-api-access-2rwmd\") pod \"ingress-canary-t4g4w\" (UID: \"fd02555c-e6fc-4825-a06a-53497a2cfeda\") 
" pod="openshift-ingress-canary/ingress-canary-t4g4w" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296923 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32727d31-2207-4688-b70c-6045b674538b-serving-cert\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296950 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb155f7a-3c80-42c5-adfe-69f854a2d032-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.296993 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d1a535-6aba-4633-8091-42e633b865b1-serving-cert\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297045 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tq6l\" (UniqueName: \"kubernetes.io/projected/9d51f527-2205-4113-9b65-655f3fab2e1c-kube-api-access-2tq6l\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297070 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d51f527-2205-4113-9b65-655f3fab2e1c-auth-proxy-config\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297094 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ffb9e3b-f114-44b1-9521-096b538ce9bf-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297120 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8bb\" (UniqueName: \"kubernetes.io/projected/3349a550-cd49-4627-8fd8-7bb82f26c0e4-kube-api-access-qt8bb\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: 
\"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297174 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-stats-auth\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297198 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt7cq\" (UniqueName: \"kubernetes.io/projected/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-kube-api-access-bt7cq\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f0014c51-8bc7-44e7-846b-3c7d97a67913-metrics-tls\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297242 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfnw\" (UniqueName: \"kubernetes.io/projected/421edb7e-dea2-4578-894e-32e9eb8aff3b-kube-api-access-jdfnw\") pod \"package-server-manager-789f6589d5-pxjkj\" (UID: \"421edb7e-dea2-4578-894e-32e9eb8aff3b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297265 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a356b9d0-54fe-4bac-9589-027e7cbfeb87-apiservice-cert\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-registry-tls\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297319 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297342 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-metrics-certs\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.297365 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3349a550-cd49-4627-8fd8-7bb82f26c0e4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: 
\"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.297780 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.797754617 +0000 UTC m=+151.684492752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.298483 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-registry-certificates\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.298532 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.298784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-config\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.298959 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgz6j\" (UniqueName: \"kubernetes.io/projected/1afd949e-d0f2-41b8-9632-917df3468232-kube-api-access-kgz6j\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.301044 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-default-certificate\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.299933 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d879a534-2c8a-463b-bbf6-75213cb4d554-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bbnxh\" (UID: \"d879a534-2c8a-463b-bbf6-75213cb4d554\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.300778 4886 request.go:700] Waited for 1.015735702s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.300591 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-config\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.301733 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d51f527-2205-4113-9b65-655f3fab2e1c-auth-proxy-config\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.301741 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-config\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.301816 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-service-ca\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.301868 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-metrics-tls\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 
24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.301891 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3349a550-cd49-4627-8fd8-7bb82f26c0e4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: \"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.301931 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjdn4\" (UniqueName: \"kubernetes.io/projected/87f902e1-073b-4ccd-8b3a-717f802e9671-kube-api-access-cjdn4\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.302014 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.302660 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-trusted-ca\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.303741 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32727d31-2207-4688-b70c-6045b674538b-serving-cert\") pod 
\"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.303828 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.303886 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-registry-tls\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.305723 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb155f7a-3c80-42c5-adfe-69f854a2d032-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.306036 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-service-ca\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.307447 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.307503 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-csi-data-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.307555 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/400260cc-a84e-4d59-99f5-5e2359ceee1c-signing-key\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.307603 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9kjj\" (UniqueName: \"kubernetes.io/projected/f0014c51-8bc7-44e7-846b-3c7d97a67913-kube-api-access-p9kjj\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.307642 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-audit-policies\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.307977 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx75p\" (UniqueName: \"kubernetes.io/projected/a54a2524-099e-4a0f-9762-eafbc576dc56-kube-api-access-rx75p\") pod 
\"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308029 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/168f234d-da70-475a-b6df-2771ab11368e-audit-dir\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308170 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb155f7a-3c80-42c5-adfe-69f854a2d032-config\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308233 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308337 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/afdfb747-0bc0-40a4-89e6-dc6970617398-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fr58\" (UID: \"afdfb747-0bc0-40a4-89e6-dc6970617398\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 
08:51:35.308386 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7242d2-301f-4d8f-816a-a36418be67ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308796 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgx7\" (UniqueName: \"kubernetes.io/projected/97be751b-9ee7-45b8-bb05-5db918750f72-kube-api-access-lhgx7\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308858 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j687\" (UniqueName: \"kubernetes.io/projected/400260cc-a84e-4d59-99f5-5e2359ceee1c-kube-api-access-9j687\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308884 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308912 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-serving-cert\") 
pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308964 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d51f527-2205-4113-9b65-655f3fab2e1c-machine-approver-tls\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.308985 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd02555c-e6fc-4825-a06a-53497a2cfeda-cert\") pod \"ingress-canary-t4g4w\" (UID: \"fd02555c-e6fc-4825-a06a-53497a2cfeda\") " pod="openshift-ingress-canary/ingress-canary-t4g4w" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.309182 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb155f7a-3c80-42c5-adfe-69f854a2d032-config\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.309393 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbe26e55-76a5-4f66-b3c6-5f8933372332-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.309442 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbe26e55-76a5-4f66-b3c6-5f8933372332-proxy-tls\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.309509 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-trusted-ca-bundle\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.309543 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d51f527-2205-4113-9b65-655f3fab2e1c-config\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.309573 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-plugins-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311288 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9kr\" (UniqueName: \"kubernetes.io/projected/bbe26e55-76a5-4f66-b3c6-5f8933372332-kube-api-access-gk9kr\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311378 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-service-ca-bundle\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311411 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30599c42-eef7-4967-b84f-95b49a225bd6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311432 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-mountpoint-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311450 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a356b9d0-54fe-4bac-9589-027e7cbfeb87-tmpfs\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311563 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-socket-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311657 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5wx5\" (UniqueName: \"kubernetes.io/projected/d879a534-2c8a-463b-bbf6-75213cb4d554-kube-api-access-q5wx5\") pod \"cluster-samples-operator-665b6dd947-bbnxh\" (UID: \"d879a534-2c8a-463b-bbf6-75213cb4d554\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/400260cc-a84e-4d59-99f5-5e2359ceee1c-signing-cabundle\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311700 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311756 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb155f7a-3c80-42c5-adfe-69f854a2d032-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311783 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1559d3ac-0229-4e31-9d0b-ebf633409384-proxy-tls\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311881 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/560047dc-48dd-40d2-b7b3-8a8e4db0d7c6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4kfg8\" (UID: \"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311934 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311961 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-srv-cert\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.311999 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx559\" (UniqueName: \"kubernetes.io/projected/364b3e42-dafa-45cd-bf38-545cc2eb9e21-kube-api-access-lx559\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312032 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf28v\" (UniqueName: \"kubernetes.io/projected/ce7242d2-301f-4d8f-816a-a36418be67ca-kube-api-access-gf28v\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5j9c\" (UniqueName: \"kubernetes.io/projected/a9e033b5-0aef-4e45-924c-338d2a914c5a-kube-api-access-h5j9c\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312323 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-bound-sa-token\") pod \"image-registry-697d97f7c8-n794d\" (UID: 
\"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312375 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1559d3ac-0229-4e31-9d0b-ebf633409384-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312405 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/89ad3a24-065a-4210-bc90-737b51139e8c-node-bootstrap-token\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312412 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d51f527-2205-4113-9b65-655f3fab2e1c-config\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312434 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4jx\" (UniqueName: \"kubernetes.io/projected/e8247ed1-a90a-409b-a326-07bb154a4d16-kube-api-access-hl4jx\") pod \"migrator-59844c95c7-xjmsp\" (UID: \"e8247ed1-a90a-409b-a326-07bb154a4d16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312510 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-trusted-ca\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312546 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwtmc\" (UniqueName: \"kubernetes.io/projected/a356b9d0-54fe-4bac-9589-027e7cbfeb87-kube-api-access-kwtmc\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312575 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d51f527-2205-4113-9b65-655f3fab2e1c-machine-approver-tls\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312630 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312670 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-etcd-client\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: 
\"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312713 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d1a535-6aba-4633-8091-42e633b865b1-config\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312974 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffb9e3b-f114-44b1-9521-096b538ce9bf-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.312999 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30599c42-eef7-4967-b84f-95b49a225bd6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313032 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313102 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-serving-cert\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313425 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313471 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pbvq\" (UniqueName: \"kubernetes.io/projected/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-kube-api-access-5pbvq\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313529 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-audit-dir\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313588 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9e033b5-0aef-4e45-924c-338d2a914c5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313612 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/89ad3a24-065a-4210-bc90-737b51139e8c-certs\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313662 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313714 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmhrd\" (UniqueName: \"kubernetes.io/projected/168f234d-da70-475a-b6df-2771ab11368e-kube-api-access-bmhrd\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313753 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9e033b5-0aef-4e45-924c-338d2a914c5a-srv-cert\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313781 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8bdf9\" (UniqueName: \"kubernetes.io/projected/98d1a535-6aba-4633-8091-42e633b865b1-kube-api-access-8bdf9\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.313873 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-oauth-serving-cert\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314036 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a356b9d0-54fe-4bac-9589-027e7cbfeb87-webhook-cert\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314107 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74mns\" (UniqueName: \"kubernetes.io/projected/32727d31-2207-4688-b70c-6045b674538b-kube-api-access-74mns\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314222 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-trusted-ca-bundle\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 
08:51:35.314265 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314311 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7xq9\" (UniqueName: \"kubernetes.io/projected/560047dc-48dd-40d2-b7b3-8a8e4db0d7c6-kube-api-access-p7xq9\") pod \"multus-admission-controller-857f4d67dd-4kfg8\" (UID: \"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314344 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30599c42-eef7-4967-b84f-95b49a225bd6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314390 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce7242d2-301f-4d8f-816a-a36418be67ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314421 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f0014c51-8bc7-44e7-846b-3c7d97a67913-config-volume\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314562 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-oauth-serving-cert\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314638 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-oauth-config\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314691 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sfj2\" (UniqueName: \"kubernetes.io/projected/89ad3a24-065a-4210-bc90-737b51139e8c-kube-api-access-4sfj2\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314717 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97be751b-9ee7-45b8-bb05-5db918750f72-serving-cert\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314742 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce7242d2-301f-4d8f-816a-a36418be67ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314760 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbjft\" (UniqueName: \"kubernetes.io/projected/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-kube-api-access-nbjft\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314789 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314869 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1afd949e-d0f2-41b8-9632-917df3468232-secret-volume\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314924 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-encryption-config\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: 
\"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.314985 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4hmv\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-kube-api-access-d4hmv\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315034 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffb9e3b-f114-44b1-9521-096b538ce9bf-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315064 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-config\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315181 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-client-ca\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315535 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2qncw\" (UniqueName: \"kubernetes.io/projected/1559d3ac-0229-4e31-9d0b-ebf633409384-kube-api-access-2qncw\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315583 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1afd949e-d0f2-41b8-9632-917df3468232-config-volume\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315615 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-228rg\" (UniqueName: \"kubernetes.io/projected/afdfb747-0bc0-40a4-89e6-dc6970617398-kube-api-access-228rg\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fr58\" (UID: \"afdfb747-0bc0-40a4-89e6-dc6970617398\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315649 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-console-config\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315707 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a54a2524-099e-4a0f-9762-eafbc576dc56-service-ca-bundle\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " 
pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315813 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.315841 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-registration-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.316379 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-console-config\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.316757 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-client-ca\") 
pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.317082 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-serving-cert\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.317490 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30599c42-eef7-4967-b84f-95b49a225bd6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.318281 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-oauth-config\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.323265 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.342636 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.362896 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 
08:51:35.383659 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.383774 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.883749484 +0000 UTC m=+151.770487619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385076 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385108 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385135 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.885117901 +0000 UTC m=+151.771856036 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385137 4886 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385180 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.885165482 +0000 UTC m=+151.771903617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385215 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config podName:b65ba9fc-0a0d-49f2-9991-319b054df0b0 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.885203443 +0000 UTC m=+151.771941578 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config") pod "route-controller-manager-6576b87f9c-4k7lh" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385727 4886 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385763 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert podName:b65ba9fc-0a0d-49f2-9991-319b054df0b0 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.885755418 +0000 UTC m=+151.772493563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert") pod "route-controller-manager-6576b87f9c-4k7lh" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0") : failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385771 4886 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.385842 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/efe0b276-8633-4435-b8af-d4651276c24f-config podName:efe0b276-8633-4435-b8af-d4651276c24f nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.88582835 +0000 UTC m=+151.772566485 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/efe0b276-8633-4435-b8af-d4651276c24f-config") pod "openshift-apiserver-operator-796bbdcf4f-vz657" (UID: "efe0b276-8633-4435-b8af-d4651276c24f") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.386058 4886 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.386079 4886 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.386102 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config podName:e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.886093828 +0000 UTC m=+151.772831963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config") pod "machine-api-operator-5694c8668f-fqpl9" (UID: "e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.386119 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-serving-cert podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.886109208 +0000 UTC m=+151.772847343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-serving-cert") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389430 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389489 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389513 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.889498001 +0000 UTC m=+151.776236146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389444 4886 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389534 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.889523382 +0000 UTC m=+151.776261517 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389554 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca podName:b65ba9fc-0a0d-49f2-9991-319b054df0b0 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.889544702 +0000 UTC m=+151.776282837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca") pod "route-controller-manager-6576b87f9c-4k7lh" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389556 4886 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389594 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.889584113 +0000 UTC m=+151.776322248 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389880 4886 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.389991 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-client podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.889963514 +0000 UTC m=+151.776701719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-client") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.390875 4886 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.390926 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efe0b276-8633-4435-b8af-d4651276c24f-serving-cert podName:efe0b276-8633-4435-b8af-d4651276c24f nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.89091619 +0000 UTC m=+151.777654325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/efe0b276-8633-4435-b8af-d4651276c24f-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-vz657" (UID: "efe0b276-8633-4435-b8af-d4651276c24f") : failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.392003 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.402326 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.416684 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.416908 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:35.916883301 +0000 UTC m=+151.803621436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.416966 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d1a535-6aba-4633-8091-42e633b865b1-config\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417001 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-etcd-client\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417043 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffb9e3b-f114-44b1-9521-096b538ce9bf-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417130 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pbvq\" (UniqueName: \"kubernetes.io/projected/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-kube-api-access-5pbvq\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417206 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417259 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9e033b5-0aef-4e45-924c-338d2a914c5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417285 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-audit-dir\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417304 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9e033b5-0aef-4e45-924c-338d2a914c5a-srv-cert\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417352 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bdf9\" (UniqueName: \"kubernetes.io/projected/98d1a535-6aba-4633-8091-42e633b865b1-kube-api-access-8bdf9\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417394 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/89ad3a24-065a-4210-bc90-737b51139e8c-certs\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417447 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417470 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhrd\" (UniqueName: \"kubernetes.io/projected/168f234d-da70-475a-b6df-2771ab11368e-kube-api-access-bmhrd\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc 
kubenswrapper[4886]: I1124 08:51:35.417523 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a356b9d0-54fe-4bac-9589-027e7cbfeb87-webhook-cert\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417600 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417632 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7xq9\" (UniqueName: \"kubernetes.io/projected/560047dc-48dd-40d2-b7b3-8a8e4db0d7c6-kube-api-access-p7xq9\") pod \"multus-admission-controller-857f4d67dd-4kfg8\" (UID: \"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417692 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0014c51-8bc7-44e7-846b-3c7d97a67913-config-volume\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417711 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfj2\" (UniqueName: \"kubernetes.io/projected/89ad3a24-065a-4210-bc90-737b51139e8c-kube-api-access-4sfj2\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " 
pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417751 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbjft\" (UniqueName: \"kubernetes.io/projected/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-kube-api-access-nbjft\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.417993 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418037 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97be751b-9ee7-45b8-bb05-5db918750f72-serving-cert\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418083 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1afd949e-d0f2-41b8-9632-917df3468232-secret-volume\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-encryption-config\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418136 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-config\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418254 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffb9e3b-f114-44b1-9521-096b538ce9bf-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418317 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qncw\" (UniqueName: \"kubernetes.io/projected/1559d3ac-0229-4e31-9d0b-ebf633409384-kube-api-access-2qncw\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418349 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1afd949e-d0f2-41b8-9632-917df3468232-config-volume\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:35 crc 
kubenswrapper[4886]: I1124 08:51:35.418404 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-228rg\" (UniqueName: \"kubernetes.io/projected/afdfb747-0bc0-40a4-89e6-dc6970617398-kube-api-access-228rg\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fr58\" (UID: \"afdfb747-0bc0-40a4-89e6-dc6970617398\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418489 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-registration-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418532 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a54a2524-099e-4a0f-9762-eafbc576dc56-service-ca-bundle\") pod 
\"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418626 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418669 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/421edb7e-dea2-4578-894e-32e9eb8aff3b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pxjkj\" (UID: \"421edb7e-dea2-4578-894e-32e9eb8aff3b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418708 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-audit-policies\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418735 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bbe26e55-76a5-4f66-b3c6-5f8933372332-images\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418755 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418766 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418848 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.418948 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgwgf\" (UniqueName: \"kubernetes.io/projected/3a73ff5c-5292-45f7-a7cf-97714a8a109d-kube-api-access-bgwgf\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.419630 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-audit-dir\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.419782 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-config\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.419812 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.419909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a54a2524-099e-4a0f-9762-eafbc576dc56-service-ca-bundle\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.419976 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.420007 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.420039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwmd\" (UniqueName: \"kubernetes.io/projected/fd02555c-e6fc-4825-a06a-53497a2cfeda-kube-api-access-2rwmd\") pod \"ingress-canary-t4g4w\" (UID: \"fd02555c-e6fc-4825-a06a-53497a2cfeda\") " pod="openshift-ingress-canary/ingress-canary-t4g4w" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.420234 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-etcd-client\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.420880 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bbe26e55-76a5-4f66-b3c6-5f8933372332-images\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.421074 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.421129 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-audit-policies\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.421068 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffb9e3b-f114-44b1-9521-096b538ce9bf-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.421369 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-registration-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.421448 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffb9e3b-f114-44b1-9521-096b538ce9bf-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.421481 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d1a535-6aba-4633-8091-42e633b865b1-serving-cert\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 
08:51:35.421550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ffb9e3b-f114-44b1-9521-096b538ce9bf-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.421629 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-stats-auth\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.421656 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt7cq\" (UniqueName: \"kubernetes.io/projected/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-kube-api-access-bt7cq\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422182 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt8bb\" (UniqueName: \"kubernetes.io/projected/3349a550-cd49-4627-8fd8-7bb82f26c0e4-kube-api-access-qt8bb\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: \"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.422227 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 08:51:35.922203857 +0000 UTC m=+151.808941992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422084 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422260 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422383 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f0014c51-8bc7-44e7-846b-3c7d97a67913-metrics-tls\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422451 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfnw\" (UniqueName: 
\"kubernetes.io/projected/421edb7e-dea2-4578-894e-32e9eb8aff3b-kube-api-access-jdfnw\") pod \"package-server-manager-789f6589d5-pxjkj\" (UID: \"421edb7e-dea2-4578-894e-32e9eb8aff3b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a356b9d0-54fe-4bac-9589-027e7cbfeb87-apiservice-cert\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422548 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-metrics-certs\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422608 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgz6j\" (UniqueName: \"kubernetes.io/projected/1afd949e-d0f2-41b8-9632-917df3468232-kube-api-access-kgz6j\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422669 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3349a550-cd49-4627-8fd8-7bb82f26c0e4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: \"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 
08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422715 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-default-certificate\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-config\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422833 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3349a550-cd49-4627-8fd8-7bb82f26c0e4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: \"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.422921 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-metrics-tls\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423009 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423135 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423244 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-csi-data-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/400260cc-a84e-4d59-99f5-5e2359ceee1c-signing-key\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423333 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9kjj\" (UniqueName: \"kubernetes.io/projected/f0014c51-8bc7-44e7-846b-3c7d97a67913-kube-api-access-p9kjj\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423388 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-audit-policies\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423421 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx75p\" (UniqueName: \"kubernetes.io/projected/a54a2524-099e-4a0f-9762-eafbc576dc56-kube-api-access-rx75p\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/168f234d-da70-475a-b6df-2771ab11368e-audit-dir\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423497 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423543 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/afdfb747-0bc0-40a4-89e6-dc6970617398-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fr58\" (UID: \"afdfb747-0bc0-40a4-89e6-dc6970617398\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423578 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j687\" (UniqueName: \"kubernetes.io/projected/400260cc-a84e-4d59-99f5-5e2359ceee1c-kube-api-access-9j687\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423606 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhgx7\" (UniqueName: \"kubernetes.io/projected/97be751b-9ee7-45b8-bb05-5db918750f72-kube-api-access-lhgx7\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423665 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-serving-cert\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: 
\"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423705 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbe26e55-76a5-4f66-b3c6-5f8933372332-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd02555c-e6fc-4825-a06a-53497a2cfeda-cert\") pod \"ingress-canary-t4g4w\" (UID: \"fd02555c-e6fc-4825-a06a-53497a2cfeda\") " pod="openshift-ingress-canary/ingress-canary-t4g4w" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423855 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbe26e55-76a5-4f66-b3c6-5f8933372332-proxy-tls\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423898 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-plugins-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423957 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9kr\" (UniqueName: \"kubernetes.io/projected/bbe26e55-76a5-4f66-b3c6-5f8933372332-kube-api-access-gk9kr\") pod 
\"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423989 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-service-ca-bundle\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424060 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-mountpoint-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424112 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a356b9d0-54fe-4bac-9589-027e7cbfeb87-tmpfs\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424216 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-socket-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/400260cc-a84e-4d59-99f5-5e2359ceee1c-signing-cabundle\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424308 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424390 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1559d3ac-0229-4e31-9d0b-ebf633409384-proxy-tls\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424453 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/560047dc-48dd-40d2-b7b3-8a8e4db0d7c6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4kfg8\" (UID: \"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424498 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 
08:51:35.424525 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-srv-cert\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424549 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3349a550-cd49-4627-8fd8-7bb82f26c0e4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: \"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424595 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx559\" (UniqueName: \"kubernetes.io/projected/364b3e42-dafa-45cd-bf38-545cc2eb9e21-kube-api-access-lx559\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424626 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424655 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424703 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5j9c\" (UniqueName: \"kubernetes.io/projected/a9e033b5-0aef-4e45-924c-338d2a914c5a-kube-api-access-h5j9c\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424750 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-trusted-ca\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwtmc\" (UniqueName: \"kubernetes.io/projected/a356b9d0-54fe-4bac-9589-027e7cbfeb87-kube-api-access-kwtmc\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424812 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1559d3ac-0229-4e31-9d0b-ebf633409384-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424846 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/89ad3a24-065a-4210-bc90-737b51139e8c-node-bootstrap-token\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424877 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl4jx\" (UniqueName: \"kubernetes.io/projected/e8247ed1-a90a-409b-a326-07bb154a4d16-kube-api-access-hl4jx\") pod \"migrator-59844c95c7-xjmsp\" (UID: \"e8247ed1-a90a-409b-a326-07bb154a4d16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.424906 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.425183 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.425357 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" 
Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423906 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/168f234d-da70-475a-b6df-2771ab11368e-audit-dir\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423632 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-config\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.426054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.426852 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-encryption-config\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.426999 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3349a550-cd49-4627-8fd8-7bb82f26c0e4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: \"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.427046 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-stats-auth\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.427303 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.427668 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97be751b-9ee7-45b8-bb05-5db918750f72-serving-cert\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.428097 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1559d3ac-0229-4e31-9d0b-ebf633409384-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.428917 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-default-certificate\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.429080 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-audit-policies\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.429343 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-mountpoint-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.429567 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-metrics-tls\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.423857 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.429616 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97be751b-9ee7-45b8-bb05-5db918750f72-service-ca-bundle\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.429991 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-csi-data-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.430078 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-socket-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.430464 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-trusted-ca\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.430727 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a356b9d0-54fe-4bac-9589-027e7cbfeb87-tmpfs\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.431369 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3a73ff5c-5292-45f7-a7cf-97714a8a109d-plugins-dir\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " 
pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.431473 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbe26e55-76a5-4f66-b3c6-5f8933372332-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.431770 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.431874 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.432300 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbe26e55-76a5-4f66-b3c6-5f8933372332-proxy-tls\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.432404 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.432475 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-serving-cert\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.432528 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.432676 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.433116 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1559d3ac-0229-4e31-9d0b-ebf633409384-proxy-tls\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.433895 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a54a2524-099e-4a0f-9762-eafbc576dc56-metrics-certs\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.436233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.443826 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.463209 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.483796 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.503231 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.508591 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-srv-cert\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.523910 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.525560 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.525693 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.025675652 +0000 UTC m=+151.912413787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.526486 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.526835 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.026825624 +0000 UTC m=+151.913563749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.530962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9e033b5-0aef-4e45-924c-338d2a914c5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.533433 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.534245 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1afd949e-d0f2-41b8-9632-917df3468232-secret-volume\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.543416 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.563923 4886 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.582548 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.603122 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.614621 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.623830 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.626982 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.627080 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.12705768 +0000 UTC m=+152.013795815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.627978 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.628423 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.128412908 +0000 UTC m=+152.015151043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.634142 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/560047dc-48dd-40d2-b7b3-8a8e4db0d7c6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4kfg8\" (UID: \"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.644455 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.662815 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.683475 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.689582 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/afdfb747-0bc0-40a4-89e6-dc6970617398-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fr58\" (UID: \"afdfb747-0bc0-40a4-89e6-dc6970617398\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.704371 4886 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.712786 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a356b9d0-54fe-4bac-9589-027e7cbfeb87-webhook-cert\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.715722 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a356b9d0-54fe-4bac-9589-027e7cbfeb87-apiservice-cert\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.723090 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.729121 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.729380 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.229337843 +0000 UTC m=+152.116075978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.729650 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.730061 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.230045042 +0000 UTC m=+152.116783387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.743885 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.755272 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/421edb7e-dea2-4578-894e-32e9eb8aff3b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pxjkj\" (UID: \"421edb7e-dea2-4578-894e-32e9eb8aff3b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.763659 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.783380 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.804057 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.811612 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/400260cc-a84e-4d59-99f5-5e2359ceee1c-signing-key\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.822772 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.830636 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/400260cc-a84e-4d59-99f5-5e2359ceee1c-signing-cabundle\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.830867 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.832192 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.332132009 +0000 UTC m=+152.218870144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.843013 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.862721 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.872411 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1afd949e-d0f2-41b8-9632-917df3468232-config-volume\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.882810 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.902486 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.923255 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933346 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/efe0b276-8633-4435-b8af-d4651276c24f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933444 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933494 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933617 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe0b276-8633-4435-b8af-d4651276c24f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933675 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933698 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-serving-cert\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933761 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933794 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933822 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933840 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 
08:51:35.933914 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd02555c-e6fc-4825-a06a-53497a2cfeda-cert\") pod \"ingress-canary-t4g4w\" (UID: \"fd02555c-e6fc-4825-a06a-53497a2cfeda\") " pod="openshift-ingress-canary/ingress-canary-t4g4w" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.933948 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.934019 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.934071 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:35 crc kubenswrapper[4886]: E1124 08:51:35.934214 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.434194165 +0000 UTC m=+152.320932390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.934356 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.934456 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-client\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.943603 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.963349 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 08:51:35 crc kubenswrapper[4886]: I1124 08:51:35.983918 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.002850 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.023913 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.035473 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.035681 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d1a535-6aba-4633-8091-42e633b865b1-serving-cert\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.035715 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.535664626 +0000 UTC m=+152.422402781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.036342 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.036773 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.536751706 +0000 UTC m=+152.423489991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.043604 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.048275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d1a535-6aba-4633-8091-42e633b865b1-config\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.063478 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.084095 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.093233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9e033b5-0aef-4e45-924c-338d2a914c5a-srv-cert\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.118534 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9hnlj\" (UniqueName: \"kubernetes.io/projected/74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd-kube-api-access-9hnlj\") pod \"etcd-operator-b45778765-zqjsm\" (UID: \"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.138609 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.138764 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.63873118 +0000 UTC m=+152.525469325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.139342 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.139778 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.639768958 +0000 UTC m=+152.526507093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.157840 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l97qz\" (UniqueName: \"kubernetes.io/projected/ebe970f2-3e2a-4222-87b7-a7d3d6b5456c-kube-api-access-l97qz\") pod \"openshift-controller-manager-operator-756b6f6bc6-dd9nj\" (UID: \"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.176783 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/316d4bac-9349-4ec0-82ce-af715e7a3259-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.197491 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fksxg\" (UniqueName: \"kubernetes.io/projected/123bf335-5130-413e-b3fa-8fa4ba9111da-kube-api-access-fksxg\") pod \"dns-operator-744455d44c-njfqk\" (UID: \"123bf335-5130-413e-b3fa-8fa4ba9111da\") " pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.208386 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.238583 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbw7n\" (UniqueName: \"kubernetes.io/projected/580c4fdc-bdb3-4099-b715-ac4c63acecb2-kube-api-access-pbw7n\") pod \"downloads-7954f5f757-w6cvz\" (UID: \"580c4fdc-bdb3-4099-b715-ac4c63acecb2\") " pod="openshift-console/downloads-7954f5f757-w6cvz" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.240287 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.241048 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.741030313 +0000 UTC m=+152.627768448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.247566 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-w6cvz" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.258094 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6cm\" (UniqueName: \"kubernetes.io/projected/316d4bac-9349-4ec0-82ce-af715e7a3259-kube-api-access-dn6cm\") pod \"cluster-image-registry-operator-dc59b4c8b-r79cd\" (UID: \"316d4bac-9349-4ec0-82ce-af715e7a3259\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.275384 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.289088 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.301728 4886 request.go:700] Waited for 1.911871736s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.309274 4886 secret.go:188] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.309422 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7242d2-301f-4d8f-816a-a36418be67ca-serving-cert podName:ce7242d2-301f-4d8f-816a-a36418be67ca nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.809394566 +0000 UTC m=+152.696132701 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce7242d2-301f-4d8f-816a-a36418be67ca-serving-cert") pod "openshift-config-operator-7777fb866f-4z5n9" (UID: "ce7242d2-301f-4d8f-816a-a36418be67ca") : failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.311345 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.322137 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xshwm\" (UniqueName: \"kubernetes.io/projected/f8080498-de06-4f8f-9c35-3d296e28a021-kube-api-access-xshwm\") pod \"console-operator-58897d9998-845fz\" (UID: \"f8080498-de06-4f8f-9c35-3d296e28a021\") " pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.323902 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.337976 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/89ad3a24-065a-4210-bc90-737b51139e8c-certs\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.347275 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:36 crc 
kubenswrapper[4886]: E1124 08:51:36.347922 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.847905001 +0000 UTC m=+152.734643136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.367983 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.369185 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.374373 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/89ad3a24-065a-4210-bc90-737b51139e8c-node-bootstrap-token\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.384710 4886 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.408802 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 
08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.421099 4886 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.421233 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0014c51-8bc7-44e7-846b-3c7d97a67913-config-volume podName:f0014c51-8bc7-44e7-846b-3c7d97a67913 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.92120733 +0000 UTC m=+152.807945475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f0014c51-8bc7-44e7-846b-3c7d97a67913-config-volume") pod "dns-default-x8hmr" (UID: "f0014c51-8bc7-44e7-846b-3c7d97a67913") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.422917 4886 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.422974 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0014c51-8bc7-44e7-846b-3c7d97a67913-metrics-tls podName:f0014c51-8bc7-44e7-846b-3c7d97a67913 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.922959608 +0000 UTC m=+152.809697743 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f0014c51-8bc7-44e7-846b-3c7d97a67913-metrics-tls") pod "dns-default-x8hmr" (UID: "f0014c51-8bc7-44e7-846b-3c7d97a67913") : failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.424435 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.444414 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.448605 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.449420 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:36.949401423 +0000 UTC m=+152.836139568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.464493 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.466289 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.483103 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.483921 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj"] Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.502990 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.508065 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.523205 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 08:51:36 crc 
kubenswrapper[4886]: I1124 08:51:36.527102 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-serving-cert\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.544857 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.552260 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.552784 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.052759775 +0000 UTC m=+152.939497910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.570450 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-njfqk"] Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.586375 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 08:51:36 crc kubenswrapper[4886]: W1124 08:51:36.593568 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod123bf335_5130_413e_b3fa_8fa4ba9111da.slice/crio-c119d6c0f1af7e89d4fd42844cf5af4d63ebff19baf7a4a7cf227ddbb380e839 WatchSource:0}: Error finding container c119d6c0f1af7e89d4fd42844cf5af4d63ebff19baf7a4a7cf227ddbb380e839: Status 404 returned error can't find the container with id c119d6c0f1af7e89d4fd42844cf5af4d63ebff19baf7a4a7cf227ddbb380e839 Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.602913 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.604682 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efe0b276-8633-4435-b8af-d4651276c24f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:36 crc 
kubenswrapper[4886]: I1124 08:51:36.622964 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.623206 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zqjsm"] Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.643831 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.653471 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.654566 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.154542614 +0000 UTC m=+153.041280749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.658879 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9gp8\" (UniqueName: \"kubernetes.io/projected/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-kube-api-access-f9gp8\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.662507 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.675825 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-w6cvz"] Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.683860 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 08:51:36 crc kubenswrapper[4886]: W1124 08:51:36.691217 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod580c4fdc_bdb3_4099_b715_ac4c63acecb2.slice/crio-e5c5e336f6d008ae945c989f55b6a771e39deede4051f2f6e861172c10469a78 WatchSource:0}: Error finding container e5c5e336f6d008ae945c989f55b6a771e39deede4051f2f6e861172c10469a78: Status 404 returned error can't find the container with id 
e5c5e336f6d008ae945c989f55b6a771e39deede4051f2f6e861172c10469a78 Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.693756 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efe0b276-8633-4435-b8af-d4651276c24f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.716300 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-845fz"] Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.717000 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tq6l\" (UniqueName: \"kubernetes.io/projected/9d51f527-2205-4113-9b65-655f3fab2e1c-kube-api-access-2tq6l\") pod \"machine-approver-56656f9798-km4wr\" (UID: \"9d51f527-2205-4113-9b65-655f3fab2e1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:36 crc kubenswrapper[4886]: W1124 08:51:36.737899 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8080498_de06_4f8f_9c35_3d296e28a021.slice/crio-1c27fe8860e9d6a603c990ad8c7151fad68ec79cf5f27ad149d62486f98c87dd WatchSource:0}: Error finding container 1c27fe8860e9d6a603c990ad8c7151fad68ec79cf5f27ad149d62486f98c87dd: Status 404 returned error can't find the container with id 1c27fe8860e9d6a603c990ad8c7151fad68ec79cf5f27ad149d62486f98c87dd Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.739942 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjdn4\" (UniqueName: \"kubernetes.io/projected/87f902e1-073b-4ccd-8b3a-717f802e9671-kube-api-access-cjdn4\") pod \"console-f9d7485db-tbxjm\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " 
pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.742575 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.755524 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.756083 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.256064396 +0000 UTC m=+153.142802531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.763112 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.765235 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.806229 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5wx5\" (UniqueName: \"kubernetes.io/projected/d879a534-2c8a-463b-bbf6-75213cb4d554-kube-api-access-q5wx5\") pod \"cluster-samples-operator-665b6dd947-bbnxh\" (UID: \"d879a534-2c8a-463b-bbf6-75213cb4d554\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.820353 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd"] Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.838047 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.843963 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb155f7a-3c80-42c5-adfe-69f854a2d032-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fsxn2\" (UID: \"eb155f7a-3c80-42c5-adfe-69f854a2d032\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.856356 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.856602 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.856821 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.356793106 +0000 UTC m=+153.243531241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.857060 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.857261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7242d2-301f-4d8f-816a-a36418be67ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.857638 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.357610108 +0000 UTC m=+153.244348253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.861299 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-bound-sa-token\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.879924 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74mns\" (UniqueName: \"kubernetes.io/projected/32727d31-2207-4688-b70c-6045b674538b-kube-api-access-74mns\") pod \"controller-manager-879f6c89f-d9sdq\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.894183 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.903082 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.906305 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.923342 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.931424 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce7242d2-301f-4d8f-816a-a36418be67ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934030 4886 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934073 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934041 4886 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934126 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.934104794 +0000 UTC m=+153.820842929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync secret cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934168 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.934139665 +0000 UTC m=+153.820877790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934197 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config podName:e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.934185266 +0000 UTC m=+153.820923451 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config") pod "machine-api-operator-5694c8668f-fqpl9" (UID: "e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934218 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934282 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.934260658 +0000 UTC m=+153.820998833 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934283 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934333 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934349 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.93432732 +0000 UTC m=+153.821065515 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934348 4886 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934376 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.934360751 +0000 UTC m=+153.821098886 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934472 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.934446733 +0000 UTC m=+153.821184868 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934534 4886 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.934564 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca podName:b65ba9fc-0a0d-49f2-9991-319b054df0b0 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.934556576 +0000 UTC m=+153.821294811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca") pod "route-controller-manager-6576b87f9c-4k7lh" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.962409 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.962763 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0014c51-8bc7-44e7-846b-3c7d97a67913-config-volume\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 
08:51:36.962869 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pbvq\" (UniqueName: \"kubernetes.io/projected/2edcc7e5-5bfb-4c39-86c9-fabf007078f4-kube-api-access-5pbvq\") pod \"apiserver-7bbb656c7d-njp2h\" (UID: \"2edcc7e5-5bfb-4c39-86c9-fabf007078f4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.962916 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f0014c51-8bc7-44e7-846b-3c7d97a67913-metrics-tls\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:36 crc kubenswrapper[4886]: E1124 08:51:36.963023 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.462991206 +0000 UTC m=+153.349729401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.963818 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0014c51-8bc7-44e7-846b-3c7d97a67913-config-volume\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.967344 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f0014c51-8bc7-44e7-846b-3c7d97a67913-metrics-tls\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.981669 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4hmv\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-kube-api-access-d4hmv\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.984565 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmhrd\" (UniqueName: \"kubernetes.io/projected/168f234d-da70-475a-b6df-2771ab11368e-kube-api-access-bmhrd\") pod \"oauth-openshift-558db77b4-vtbkx\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 
24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.989939 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-client\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:36 crc kubenswrapper[4886]: W1124 08:51:36.993842 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d51f527_2205_4113_9b65_655f3fab2e1c.slice/crio-81c1360649c1c91a303ecb560c2b78a3c4868564b85770ac7bac3bc581bdf513 WatchSource:0}: Error finding container 81c1360649c1c91a303ecb560c2b78a3c4868564b85770ac7bac3bc581bdf513: Status 404 returned error can't find the container with id 81c1360649c1c91a303ecb560c2b78a3c4868564b85770ac7bac3bc581bdf513 Nov 24 08:51:36 crc kubenswrapper[4886]: I1124 08:51:36.999559 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdf9\" (UniqueName: \"kubernetes.io/projected/98d1a535-6aba-4633-8091-42e633b865b1-kube-api-access-8bdf9\") pod \"service-ca-operator-777779d784-hplr8\" (UID: \"98d1a535-6aba-4633-8091-42e633b865b1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.019584 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgwgf\" (UniqueName: \"kubernetes.io/projected/3a73ff5c-5292-45f7-a7cf-97714a8a109d-kube-api-access-bgwgf\") pod \"csi-hostpathplugin-rj5kj\" (UID: \"3a73ff5c-5292-45f7-a7cf-97714a8a109d\") " pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.043375 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qncw\" (UniqueName: 
\"kubernetes.io/projected/1559d3ac-0229-4e31-9d0b-ebf633409384-kube-api-access-2qncw\") pod \"machine-config-controller-84d6567774-7ftcq\" (UID: \"1559d3ac-0229-4e31-9d0b-ebf633409384\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.065288 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7xq9\" (UniqueName: \"kubernetes.io/projected/560047dc-48dd-40d2-b7b3-8a8e4db0d7c6-kube-api-access-p7xq9\") pod \"multus-admission-controller-857f4d67dd-4kfg8\" (UID: \"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.068438 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.068862 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.568845616 +0000 UTC m=+153.455583751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.083983 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.092804 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-228rg\" (UniqueName: \"kubernetes.io/projected/afdfb747-0bc0-40a4-89e6-dc6970617398-kube-api-access-228rg\") pod \"control-plane-machine-set-operator-78cbb6b69f-5fr58\" (UID: \"afdfb747-0bc0-40a4-89e6-dc6970617398\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.097856 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.105799 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbjft\" (UniqueName: \"kubernetes.io/projected/7b5ec71d-6e2e-4cf9-af45-b0147e7598b6-kube-api-access-nbjft\") pod \"olm-operator-6b444d44fb-z89jx\" (UID: \"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.107132 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" event={"ID":"316d4bac-9349-4ec0-82ce-af715e7a3259","Type":"ContainerStarted","Data":"3deea0198de35d63653de7bd47811d2dfe681928e2bb1592a8dda78bf36a2b96"} Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.108071 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" event={"ID":"123bf335-5130-413e-b3fa-8fa4ba9111da","Type":"ContainerStarted","Data":"c119d6c0f1af7e89d4fd42844cf5af4d63ebff19baf7a4a7cf227ddbb380e839"} Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.112512 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" event={"ID":"9d51f527-2205-4113-9b65-655f3fab2e1c","Type":"ContainerStarted","Data":"81c1360649c1c91a303ecb560c2b78a3c4868564b85770ac7bac3bc581bdf513"} Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.116018 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-845fz" event={"ID":"f8080498-de06-4f8f-9c35-3d296e28a021","Type":"ContainerStarted","Data":"1c27fe8860e9d6a603c990ad8c7151fad68ec79cf5f27ad149d62486f98c87dd"} Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.121032 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4sfj2\" (UniqueName: \"kubernetes.io/projected/89ad3a24-065a-4210-bc90-737b51139e8c-kube-api-access-4sfj2\") pod \"machine-config-server-4zfpn\" (UID: \"89ad3a24-065a-4210-bc90-737b51139e8c\") " pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.123508 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.130685 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" event={"ID":"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd","Type":"ContainerStarted","Data":"6676a85ed0f300c9ef2ba5f324bd483ae2186a44beeba0b08d89e930c149cc48"} Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.133882 4886 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.133934 4886 projected.go:194] Error preparing data for projected volume kube-api-access-qp4dt for pod openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.134008 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/efe0b276-8633-4435-b8af-d4651276c24f-kube-api-access-qp4dt podName:efe0b276-8633-4435-b8af-d4651276c24f nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.633983921 +0000 UTC m=+153.520722056 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qp4dt" (UniqueName: "kubernetes.io/projected/efe0b276-8633-4435-b8af-d4651276c24f-kube-api-access-qp4dt") pod "openshift-apiserver-operator-796bbdcf4f-vz657" (UID: "efe0b276-8633-4435-b8af-d4651276c24f") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.135953 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.136265 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tbxjm"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.139248 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w6cvz" event={"ID":"580c4fdc-bdb3-4099-b715-ac4c63acecb2","Type":"ContainerStarted","Data":"f69822470bbe523a2ecc1af4c27c3a21cc780e5b9a530b1faa388baf49a16bc6"} Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.139312 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-w6cvz" event={"ID":"580c4fdc-bdb3-4099-b715-ac4c63acecb2","Type":"ContainerStarted","Data":"e5c5e336f6d008ae945c989f55b6a771e39deede4051f2f6e861172c10469a78"} Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.147019 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" event={"ID":"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c","Type":"ContainerStarted","Data":"956ae7c8f8c6981fa4048187daa0bbb163a38946543513467820edac0197fbf1"} Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.147080 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" 
event={"ID":"ebe970f2-3e2a-4222-87b7-a7d3d6b5456c","Type":"ContainerStarted","Data":"89b1f8ebe0edb165f210bcd9e777f08aa36d2020f7c21a58491c6dd051fd5a9b"} Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.174026 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.174664 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.674642555 +0000 UTC m=+153.561380690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.177584 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwmd\" (UniqueName: \"kubernetes.io/projected/fd02555c-e6fc-4825-a06a-53497a2cfeda-kube-api-access-2rwmd\") pod \"ingress-canary-t4g4w\" (UID: \"fd02555c-e6fc-4825-a06a-53497a2cfeda\") " pod="openshift-ingress-canary/ingress-canary-t4g4w" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.209231 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5ffb9e3b-f114-44b1-9521-096b538ce9bf-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzqpc\" (UID: \"5ffb9e3b-f114-44b1-9521-096b538ce9bf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.209747 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.209827 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a0bfe9c-4f8c-47f6-945b-0f93f9888f93-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x5nfr\" (UID: \"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.216276 4886 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.224500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt7cq\" (UniqueName: \"kubernetes.io/projected/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-kube-api-access-bt7cq\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.232948 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.240860 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.247431 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.249878 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.251083 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt8bb\" (UniqueName: \"kubernetes.io/projected/3349a550-cd49-4627-8fd8-7bb82f26c0e4-kube-api-access-qt8bb\") pod \"kube-storage-version-migrator-operator-b67b599dd-v7qjg\" (UID: \"3349a550-cd49-4627-8fd8-7bb82f26c0e4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.263405 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfnw\" (UniqueName: \"kubernetes.io/projected/421edb7e-dea2-4578-894e-32e9eb8aff3b-kube-api-access-jdfnw\") pod \"package-server-manager-789f6589d5-pxjkj\" (UID: \"421edb7e-dea2-4578-894e-32e9eb8aff3b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.267575 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.274194 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.276378 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.277324 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.777305368 +0000 UTC m=+153.664043503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.288807 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgz6j\" (UniqueName: \"kubernetes.io/projected/1afd949e-d0f2-41b8-9632-917df3468232-kube-api-access-kgz6j\") pod \"collect-profiles-29399565-qwh9r\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.291639 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.293444 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9sdq"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.314887 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.320667 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1c634f86-dcd4-4cf7-aa6f-4245be5d0b57-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vb86f\" (UID: \"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.320801 4886 request.go:700] Waited for 1.895386975s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.323247 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.324129 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j687\" (UniqueName: \"kubernetes.io/projected/400260cc-a84e-4d59-99f5-5e2359ceee1c-kube-api-access-9j687\") pod \"service-ca-9c57cc56f-59wgb\" (UID: \"400260cc-a84e-4d59-99f5-5e2359ceee1c\") " pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.332667 4886 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.332697 4886 projected.go:194] Error preparing data for projected volume kube-api-access-v68xl for pod openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.332826 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl podName:b65ba9fc-0a0d-49f2-9991-319b054df0b0 nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.832776288 +0000 UTC m=+153.719514583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v68xl" (UniqueName: "kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl") pod "route-controller-manager-6576b87f9c-4k7lh" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.340505 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.351718 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.356283 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5j9c\" (UniqueName: \"kubernetes.io/projected/a9e033b5-0aef-4e45-924c-338d2a914c5a-kube-api-access-h5j9c\") pod \"catalog-operator-68c6474976-f8pr4\" (UID: \"a9e033b5-0aef-4e45-924c-338d2a914c5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.359901 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.368403 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx559\" (UniqueName: \"kubernetes.io/projected/364b3e42-dafa-45cd-bf38-545cc2eb9e21-kube-api-access-lx559\") pod \"marketplace-operator-79b997595-tsmf5\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.377839 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t4g4w" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.378342 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.378879 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.878845021 +0000 UTC m=+153.765583156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.379443 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.380100 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.880085775 +0000 UTC m=+153.766823910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.382408 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhgx7\" (UniqueName: \"kubernetes.io/projected/97be751b-9ee7-45b8-bb05-5db918750f72-kube-api-access-lhgx7\") pod \"authentication-operator-69f744f599-ztbpv\" (UID: \"97be751b-9ee7-45b8-bb05-5db918750f72\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:37 crc kubenswrapper[4886]: W1124 08:51:37.382943 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32727d31_2207_4688_b70c_6045b674538b.slice/crio-c5ec4e295cfb04d8f64542d7b1c5b0f0e56c6f6289195e25940474276d4b8e16 WatchSource:0}: Error finding container c5ec4e295cfb04d8f64542d7b1c5b0f0e56c6f6289195e25940474276d4b8e16: Status 404 returned error can't find the container with id c5ec4e295cfb04d8f64542d7b1c5b0f0e56c6f6289195e25940474276d4b8e16 Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.391139 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.399699 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hplr8"] Nov 24 08:51:37 crc 
kubenswrapper[4886]: I1124 08:51:37.403981 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.407906 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk9kr\" (UniqueName: \"kubernetes.io/projected/bbe26e55-76a5-4f66-b3c6-5f8933372332-kube-api-access-gk9kr\") pod \"machine-config-operator-74547568cd-dssrb\" (UID: \"bbe26e55-76a5-4f66-b3c6-5f8933372332\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.415102 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4zfpn" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.435043 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9kjj\" (UniqueName: \"kubernetes.io/projected/f0014c51-8bc7-44e7-846b-3c7d97a67913-kube-api-access-p9kjj\") pod \"dns-default-x8hmr\" (UID: \"f0014c51-8bc7-44e7-846b-3c7d97a67913\") " pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.442876 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwtmc\" (UniqueName: \"kubernetes.io/projected/a356b9d0-54fe-4bac-9589-027e7cbfeb87-kube-api-access-kwtmc\") pod \"packageserver-d55dfcdfc-npx2b\" (UID: \"a356b9d0-54fe-4bac-9589-027e7cbfeb87\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.447397 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.462054 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx75p\" (UniqueName: \"kubernetes.io/projected/a54a2524-099e-4a0f-9762-eafbc576dc56-kube-api-access-rx75p\") pod \"router-default-5444994796-xbr84\" (UID: \"a54a2524-099e-4a0f-9762-eafbc576dc56\") " pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.476486 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rj5kj"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.480761 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.480968 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl4jx\" (UniqueName: \"kubernetes.io/projected/e8247ed1-a90a-409b-a326-07bb154a4d16-kube-api-access-hl4jx\") pod \"migrator-59844c95c7-xjmsp\" (UID: \"e8247ed1-a90a-409b-a326-07bb154a4d16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.481067 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.981049521 +0000 UTC m=+153.867787656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.481195 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.481672 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:37.981660958 +0000 UTC m=+153.868399093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.486378 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.505937 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.518916 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.523915 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.525563 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.527324 4886 projected.go:194] Error preparing data for projected volume kube-api-access-vv2hw for pod openshift-apiserver/apiserver-76f77b778f-lz4ml: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.527440 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-kube-api-access-vv2hw podName:eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2 nodeName:}" failed. 
No retries permitted until 2025-11-24 08:51:38.027415502 +0000 UTC m=+153.914153637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vv2hw" (UniqueName: "kubernetes.io/projected/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-kube-api-access-vv2hw") pod "apiserver-76f77b778f-lz4ml" (UID: "eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.549563 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.560830 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.563740 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.582204 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.582788 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.082770898 +0000 UTC m=+153.969509033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.584275 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.584812 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.599686 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.602818 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 08:51:37 crc kubenswrapper[4886]: W1124 08:51:37.604387 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a73ff5c_5292_45f7_a7cf_97714a8a109d.slice/crio-517da5288c5a4052b8a3822598fb38606fdaa1849a0463768faf032598f23450 WatchSource:0}: Error finding container 517da5288c5a4052b8a3822598fb38606fdaa1849a0463768faf032598f23450: Status 404 returned error can't find the container with id 517da5288c5a4052b8a3822598fb38606fdaa1849a0463768faf032598f23450 Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.606709 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.625577 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.636938 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.639375 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.645546 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.663900 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.678941 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtbkx"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.693888 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.694026 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp4dt\" (UniqueName: \"kubernetes.io/projected/efe0b276-8633-4435-b8af-d4651276c24f-kube-api-access-qp4dt\") pod 
\"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.694418 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.194388967 +0000 UTC m=+154.081127102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.694930 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.704530 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp4dt\" (UniqueName: \"kubernetes.io/projected/efe0b276-8633-4435-b8af-d4651276c24f-kube-api-access-qp4dt\") pod \"openshift-apiserver-operator-796bbdcf4f-vz657\" (UID: \"efe0b276-8633-4435-b8af-d4651276c24f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.704626 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.706374 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf28v\" (UniqueName: 
\"kubernetes.io/projected/ce7242d2-301f-4d8f-816a-a36418be67ca-kube-api-access-gf28v\") pod \"openshift-config-operator-7777fb866f-4z5n9\" (UID: \"ce7242d2-301f-4d8f-816a-a36418be67ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.712599 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.726587 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.753746 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.794880 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.795412 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.295393334 +0000 UTC m=+154.182131469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.811276 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58"] Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.828951 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.897164 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.897225 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v68xl\" (UniqueName: \"kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:37 crc kubenswrapper[4886]: E1124 08:51:37.897771 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.397748449 +0000 UTC m=+154.284486584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.911895 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.914760 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v68xl\" (UniqueName: \"kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:37 crc kubenswrapper[4886]: I1124 08:51:37.984992 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010256 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010467 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010503 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010577 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010593 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010620 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010636 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.010677 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.011484 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.511437934 +0000 UTC m=+154.398176079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.023512 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f-config\") pod \"machine-api-operator-5694c8668f-fqpl9\" (UID: \"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.023928 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca\") pod \"route-controller-manager-6576b87f9c-4k7lh\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.026470 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-etcd-serving-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.028707 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.030977 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-audit\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.031386 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.031764 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-image-import-ca\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.047583 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-encryption-config\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.113608 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.114826 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.614800607 +0000 UTC m=+154.501538742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.116656 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv2hw\" (UniqueName: \"kubernetes.io/projected/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-kube-api-access-vv2hw\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.135011 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv2hw\" (UniqueName: \"kubernetes.io/projected/eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2-kube-api-access-vv2hw\") pod \"apiserver-76f77b778f-lz4ml\" (UID: \"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2\") " pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.154773 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.163283 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.169983 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" event={"ID":"98d1a535-6aba-4633-8091-42e633b865b1","Type":"ContainerStarted","Data":"1ecaa1c74f9357cda54943ec12097fb1775b1a9f821686361ab20e0597efa009"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.172214 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" event={"ID":"9d51f527-2205-4113-9b65-655f3fab2e1c","Type":"ContainerStarted","Data":"3f0c12f24e1d94e22b10ef374b22c1c5b077c3e610853971b36a50e962144960"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.175300 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" event={"ID":"d879a534-2c8a-463b-bbf6-75213cb4d554","Type":"ContainerStarted","Data":"207f65a2739539357074cffb59ad53706062c8e6fd4f01361942293cd489525a"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.177016 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-845fz" event={"ID":"f8080498-de06-4f8f-9c35-3d296e28a021","Type":"ContainerStarted","Data":"6d33e3dd089749680b0e4427c16b3fb13b5dd070bebb9cf38fba1cbfea38baab"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.177455 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.178810 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" 
event={"ID":"3a73ff5c-5292-45f7-a7cf-97714a8a109d","Type":"ContainerStarted","Data":"517da5288c5a4052b8a3822598fb38606fdaa1849a0463768faf032598f23450"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.182252 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tbxjm" event={"ID":"87f902e1-073b-4ccd-8b3a-717f802e9671","Type":"ContainerStarted","Data":"8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.182313 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tbxjm" event={"ID":"87f902e1-073b-4ccd-8b3a-717f802e9671","Type":"ContainerStarted","Data":"545e8bd5ff09b34f5aae73ec10886c2efd2047f54b1945232ccc5886e0f42486"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.185548 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" event={"ID":"32727d31-2207-4688-b70c-6045b674538b","Type":"ContainerStarted","Data":"cb4617b1031f737a531df633c01416bcf80b241c706e8e32b34439c75f45c002"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.185578 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" event={"ID":"32727d31-2207-4688-b70c-6045b674538b","Type":"ContainerStarted","Data":"c5ec4e295cfb04d8f64542d7b1c5b0f0e56c6f6289195e25940474276d4b8e16"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.188904 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" event={"ID":"316d4bac-9349-4ec0-82ce-af715e7a3259","Type":"ContainerStarted","Data":"cb144ea9ce742e245ef9cc8e1ba9ffbc633544bdc88b06d422afc17efaddc37b"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.192185 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-4zfpn" event={"ID":"89ad3a24-065a-4210-bc90-737b51139e8c","Type":"ContainerStarted","Data":"879752d860805432e1a2ea6d3aab94993365acf0197143705ef3003914e03481"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.196380 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" event={"ID":"74df4af9-c3c0-4e2e-bd0f-aefb13ca53cd","Type":"ContainerStarted","Data":"779eb24c708d03d005326fdd28b52be7751116fabceacb22cac731b1b3904853"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.198061 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" event={"ID":"5ffb9e3b-f114-44b1-9521-096b538ce9bf","Type":"ContainerStarted","Data":"0adabae8e274e44933d3eb344a74398c27a58ebd3e3b54b9e1fa4e96ede4b576"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.214336 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" event={"ID":"afdfb747-0bc0-40a4-89e6-dc6970617398","Type":"ContainerStarted","Data":"790ea003069355e12d7cccd9aa69bdb521f19d9b6d1fb2473383a8ebb97c6819"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.216434 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" event={"ID":"1559d3ac-0229-4e31-9d0b-ebf633409384","Type":"ContainerStarted","Data":"e88ab6ff898bc14f9f433ae9912f98dc78e508e12e669aa8ffefc9758ac8fe8e"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.217596 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.217816 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.717778548 +0000 UTC m=+154.604516683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.217975 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.218706 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.718685283 +0000 UTC m=+154.605423418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.220924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" event={"ID":"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93","Type":"ContainerStarted","Data":"738e374c76f93ce0391e58363e8c7c507791056ef70faef5e894e99de8fa2c5d"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.222399 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xbr84" event={"ID":"a54a2524-099e-4a0f-9762-eafbc576dc56","Type":"ContainerStarted","Data":"622063cd6251ff66262d0b7eb50ed41eefedf476c4aaad7c370f036ecee9febc"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.224802 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" event={"ID":"eb155f7a-3c80-42c5-adfe-69f854a2d032","Type":"ContainerStarted","Data":"3fc2a367b7420b9c0a4b68b6b17563b9b60ec174634c8bb509a491a33729859e"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.227990 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" event={"ID":"123bf335-5130-413e-b3fa-8fa4ba9111da","Type":"ContainerStarted","Data":"9a4b15662b60fd7e7047784d09e8e19c6b4589d5596a27d7e46e44b93c1483c5"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.229883 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" event={"ID":"168f234d-da70-475a-b6df-2771ab11368e","Type":"ContainerStarted","Data":"144cfe1bbe6e337034c9d8f295c50e3c42ad810bc14f2d02242ec8d6774217e8"} Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.230116 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-w6cvz" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.259584 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.267303 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6cvz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.267366 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6cvz" podUID="580c4fdc-bdb3-4099-b715-ac4c63acecb2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.267727 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-845fz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.269971 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-845fz" podUID="f8080498-de06-4f8f-9c35-3d296e28a021" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 
10.217.0.8:8443: connect: connection refused" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.324389 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.364592 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.86454982 +0000 UTC m=+154.751287955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.375734 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.380384 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.396930 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.427237 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.428974 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:38.928954685 +0000 UTC m=+154.815692820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.484955 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.486136 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.529629 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:38 crc 
kubenswrapper[4886]: E1124 08:51:38.529923 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.029891221 +0000 UTC m=+154.916629356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.532669 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.533101 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.033087788 +0000 UTC m=+154.919825923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.633555 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.633763 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.133741206 +0000 UTC m=+155.020479341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.634304 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.634597 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.134587119 +0000 UTC m=+155.021325254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.736597 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.737065 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.237042157 +0000 UTC m=+155.123780302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.745430 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t4g4w"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.752778 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-59wgb"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.757107 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4kfg8"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.762766 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tbxjm" podStartSLOduration=133.759374799 podStartE2EDuration="2m13.759374799s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:38.75577467 +0000 UTC m=+154.642512835" watchObservedRunningTime="2025-11-24 08:51:38.759374799 +0000 UTC m=+154.646112924" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.792068 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.822379 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x8hmr"] Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.839675 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.840297 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.340278766 +0000 UTC m=+155.227016911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: W1124 08:51:38.859574 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd02555c_e6fc_4825_a06a_53497a2cfeda.slice/crio-5b630e7fb1f00d744cb4115d4b78ccbc40166df9943fdfe2130f80cf6dddf0e9 WatchSource:0}: Error finding container 5b630e7fb1f00d744cb4115d4b78ccbc40166df9943fdfe2130f80cf6dddf0e9: Status 404 returned error can't find the container with id 5b630e7fb1f00d744cb4115d4b78ccbc40166df9943fdfe2130f80cf6dddf0e9 Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.863135 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f"] Nov 24 08:51:38 crc kubenswrapper[4886]: W1124 08:51:38.880202 4886 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod560047dc_48dd_40d2_b7b3_8a8e4db0d7c6.slice/crio-4a494d694e4574eca7ea3763c7c8d1ea43bff42a12c681dab394a14bacccb0ad WatchSource:0}: Error finding container 4a494d694e4574eca7ea3763c7c8d1ea43bff42a12c681dab394a14bacccb0ad: Status 404 returned error can't find the container with id 4a494d694e4574eca7ea3763c7c8d1ea43bff42a12c681dab394a14bacccb0ad Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.940965 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-845fz" podStartSLOduration=133.940938564 podStartE2EDuration="2m13.940938564s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:38.933180561 +0000 UTC m=+154.819918696" watchObservedRunningTime="2025-11-24 08:51:38.940938564 +0000 UTC m=+154.827676699" Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.943435 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:38 crc kubenswrapper[4886]: E1124 08:51:38.944337 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.443938756 +0000 UTC m=+155.330676891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:38 crc kubenswrapper[4886]: I1124 08:51:38.993978 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.029768 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.034046 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ztbpv"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.045722 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.049214 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.54919753 +0000 UTC m=+155.435935665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.095519 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zqjsm" podStartSLOduration=134.095495339 podStartE2EDuration="2m14.095495339s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.093455673 +0000 UTC m=+154.980193828" watchObservedRunningTime="2025-11-24 08:51:39.095495339 +0000 UTC m=+154.982233464" Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.100876 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.102987 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.125967 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r79cd" podStartSLOduration=134.125944413 podStartE2EDuration="2m14.125944413s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.120691679 +0000 UTC m=+155.007429814" 
watchObservedRunningTime="2025-11-24 08:51:39.125944413 +0000 UTC m=+155.012682548" Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.128600 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsmf5"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.133589 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.147839 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.148296 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.648278145 +0000 UTC m=+155.535016280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.162924 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.172746 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fqpl9"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.249959 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.250387 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.750372333 +0000 UTC m=+155.637110468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.252617 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" event={"ID":"400260cc-a84e-4d59-99f5-5e2359ceee1c","Type":"ContainerStarted","Data":"4061510bcbd9113db8187a8998e386a04915a7a8f0559f06355c02c356f4ecd2"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.273500 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lz4ml"] Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.278386 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dd9nj" podStartSLOduration=134.27836802 podStartE2EDuration="2m14.27836802s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.277559018 +0000 UTC m=+155.164297153" watchObservedRunningTime="2025-11-24 08:51:39.27836802 +0000 UTC m=+155.165106155" Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.333886 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-w6cvz" podStartSLOduration=134.333859019 podStartE2EDuration="2m14.333859019s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 08:51:39.333431398 +0000 UTC m=+155.220169543" watchObservedRunningTime="2025-11-24 08:51:39.333859019 +0000 UTC m=+155.220597154" Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.351599 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.352129 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:39.852107289 +0000 UTC m=+155.738845424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.379026 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t4g4w" event={"ID":"fd02555c-e6fc-4825-a06a-53497a2cfeda","Type":"ContainerStarted","Data":"5b630e7fb1f00d744cb4115d4b78ccbc40166df9943fdfe2130f80cf6dddf0e9"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.403244 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" 
event={"ID":"123bf335-5130-413e-b3fa-8fa4ba9111da","Type":"ContainerStarted","Data":"ee2d4526da695ea97206dcff51b6551a044db1d8c0ecd5dd6e7be543fbaf3f51"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.421768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" event={"ID":"d879a534-2c8a-463b-bbf6-75213cb4d554","Type":"ContainerStarted","Data":"83241c3639307232a6ee2e996a0ba992c218ec76f2cb4afd839840ea30adcf23"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.439368 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" event={"ID":"1559d3ac-0229-4e31-9d0b-ebf633409384","Type":"ContainerStarted","Data":"64ab1d7c80b23478ed19ff7bf9515e9b477b77e979b0c557e6f8233561d5b105"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.449571 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" event={"ID":"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57","Type":"ContainerStarted","Data":"5afeb03d22eb88574edac51781d4be4ed3ad0a0774e2d463dadc901cc4facccf"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.453407 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.453782 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 08:51:39.953768215 +0000 UTC m=+155.840506350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.464272 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" event={"ID":"eb155f7a-3c80-42c5-adfe-69f854a2d032","Type":"ContainerStarted","Data":"b2614d3070412ed6c8f00fa9781f4feeaa29d303bf826ffd238c91b8c77ccc1e"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.482095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xbr84" event={"ID":"a54a2524-099e-4a0f-9762-eafbc576dc56","Type":"ContainerStarted","Data":"221beb3ac4681ca9a6111f5daa87f6285a17ff05f28f5ccd3f967e1b503a5767"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.507402 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4zfpn" event={"ID":"89ad3a24-065a-4210-bc90-737b51139e8c","Type":"ContainerStarted","Data":"e8a474d6aa401890de6f9362222146819a605778d1f14fcb191844c6b85e437b"} Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.527077 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.527133 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" 
event={"ID":"e8247ed1-a90a-409b-a326-07bb154a4d16","Type":"ContainerStarted","Data":"e0f5bab9b48b363fd63574511f3a879bc6dba297fe6aef0ff9ce49e04eaa97c7"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.529957 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.530032 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.543725 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" event={"ID":"1afd949e-d0f2-41b8-9632-917df3468232","Type":"ContainerStarted","Data":"7b3dca954eaf4c9da537eafa5a081d7a3b680b6d0ded76a6a85c95f22d76af83"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.543784 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" event={"ID":"1afd949e-d0f2-41b8-9632-917df3468232","Type":"ContainerStarted","Data":"aa55b97334366ac064ff40f5cb14bfd699228dd32621f679bd52fd7658ae8b90"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.561224 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.565428 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" event={"ID":"5ffb9e3b-f114-44b1-9521-096b538ce9bf","Type":"ContainerStarted","Data":"3ac2ae62286a967c4de2efe015c46a30f495626c3fe66b7daf90206517ec1074"}
Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.568550 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.068523539 +0000 UTC m=+155.955261674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.593745 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" event={"ID":"b65ba9fc-0a0d-49f2-9991-319b054df0b0","Type":"ContainerStarted","Data":"0aaabad0f46973659ca6fdd06412de9bf4454a5075d1e587845f09613f54e20d"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.604750 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" event={"ID":"3a0bfe9c-4f8c-47f6-945b-0f93f9888f93","Type":"ContainerStarted","Data":"6866ab216bb9f0fdcc274202845c65a20e126b1afa00bcc73edd2c247d8204aa"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.616312 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" event={"ID":"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6","Type":"ContainerStarted","Data":"4a494d694e4574eca7ea3763c7c8d1ea43bff42a12c681dab394a14bacccb0ad"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.628628 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" event={"ID":"2edcc7e5-5bfb-4c39-86c9-fabf007078f4","Type":"ContainerStarted","Data":"52466a692636454ae9069a087dbce7d85e589e9cfa5dabba2fc0817efc5b3188"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.633330 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" event={"ID":"421edb7e-dea2-4578-894e-32e9eb8aff3b","Type":"ContainerStarted","Data":"99410f363b649a386be7d6bcf7736962b9884a23ebce4ce33082e1dce25c7606"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.633399 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" event={"ID":"421edb7e-dea2-4578-894e-32e9eb8aff3b","Type":"ContainerStarted","Data":"ab7b85ef8ccfcbd2e85371f84b53371815d66b68bade64233ffe49b745866d22"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.643331 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" event={"ID":"168f234d-da70-475a-b6df-2771ab11368e","Type":"ContainerStarted","Data":"aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.646834 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.649427 4886 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vtbkx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body=
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.649478 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" podUID="168f234d-da70-475a-b6df-2771ab11368e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.670298 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.670620 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x8hmr" event={"ID":"f0014c51-8bc7-44e7-846b-3c7d97a67913","Type":"ContainerStarted","Data":"3725cb1d497dba7078c46b7c4987f5ced2c2b1f8b536ceff4e93b54301128a8a"}
Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.672987 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.17181211 +0000 UTC m=+156.058550255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.698413 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" event={"ID":"afdfb747-0bc0-40a4-89e6-dc6970617398","Type":"ContainerStarted","Data":"a77927af8a651cc6dfaf741a315c315e260781eb7d6ccca61408cdad76dcdff5"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.716211 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" event={"ID":"bbe26e55-76a5-4f66-b3c6-5f8933372332","Type":"ContainerStarted","Data":"f6acc02c187e889e5eab8a36f43e9638e9bf3acf109451169056b438dad0a3e7"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.721233 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" podStartSLOduration=134.721209473 podStartE2EDuration="2m14.721209473s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.719182948 +0000 UTC m=+155.605921093" watchObservedRunningTime="2025-11-24 08:51:39.721209473 +0000 UTC m=+155.607947608"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.721865 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" event={"ID":"efe0b276-8633-4435-b8af-d4651276c24f","Type":"ContainerStarted","Data":"41f092a1f20bcf957201e35e16e32cf8f3e3e56106d0fbc4e31893ee992307e3"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.728604 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" event={"ID":"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6","Type":"ContainerStarted","Data":"e706bc425323114ed683567aca42138d24f0fc4ef66da7c55dd20436608476bd"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.728661 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" event={"ID":"7b5ec71d-6e2e-4cf9-af45-b0147e7598b6","Type":"ContainerStarted","Data":"3c47e5a0c1e8b687de721e02807087fac34d5b1bd3c1141e3acfdf10c04eeb20"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.729129 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.735534 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z89jx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.735794 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" podUID="7b5ec71d-6e2e-4cf9-af45-b0147e7598b6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.755070 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" event={"ID":"3349a550-cd49-4627-8fd8-7bb82f26c0e4","Type":"ContainerStarted","Data":"89a0c62524392a0df912deac0d673758123225e40868c9a883df50454e6f7e5b"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.757548 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fsxn2" podStartSLOduration=133.757527258 podStartE2EDuration="2m13.757527258s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.757171928 +0000 UTC m=+155.643910063" watchObservedRunningTime="2025-11-24 08:51:39.757527258 +0000 UTC m=+155.644265393"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.777790 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" event={"ID":"97be751b-9ee7-45b8-bb05-5db918750f72","Type":"ContainerStarted","Data":"f1d76c2cf929df24c04e0b3ac539a94705f82af9d2262b2c868f8453fd74b89f"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.778844 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.782778 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.282754099 +0000 UTC m=+156.169492234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.791430 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" event={"ID":"a9e033b5-0aef-4e45-924c-338d2a914c5a","Type":"ContainerStarted","Data":"e5927a05a08f896574392373f34f4a891a82dde066a640ef5f50e50f787b3e3a"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.792550 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.798194 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" event={"ID":"98d1a535-6aba-4633-8091-42e633b865b1","Type":"ContainerStarted","Data":"02b5364195e81c0719dfb598f6bcb88eb84e04207735f1b8b686dbb2689fcb77"}
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.798253 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-f8pr4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.798529 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" podUID="a9e033b5-0aef-4e45-924c-338d2a914c5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.799384 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6cvz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.799440 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6cvz" podUID="580c4fdc-bdb3-4099-b715-ac4c63acecb2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.806487 4886 patch_prober.go:28] interesting pod/console-operator-58897d9998-845fz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.806549 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-845fz" podUID="f8080498-de06-4f8f-9c35-3d296e28a021" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.817709 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" podStartSLOduration=134.817685607 podStartE2EDuration="2m14.817685607s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.809915994 +0000 UTC m=+155.696654149" watchObservedRunningTime="2025-11-24 08:51:39.817685607 +0000 UTC m=+155.704423742"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.852548 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5nfr" podStartSLOduration=134.852497871 podStartE2EDuration="2m14.852497871s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.850550687 +0000 UTC m=+155.737288822" watchObservedRunningTime="2025-11-24 08:51:39.852497871 +0000 UTC m=+155.739236016"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.881249 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.881890 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.381873545 +0000 UTC m=+156.268611680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.967115 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" podStartSLOduration=133.96708659 podStartE2EDuration="2m13.96708659s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.924648648 +0000 UTC m=+155.811386783" watchObservedRunningTime="2025-11-24 08:51:39.96708659 +0000 UTC m=+155.853824725"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.967905 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xbr84" podStartSLOduration=134.967898163 podStartE2EDuration="2m14.967898163s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.966985318 +0000 UTC m=+155.853723453" watchObservedRunningTime="2025-11-24 08:51:39.967898163 +0000 UTC m=+155.854636298"
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.983189 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.983317 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.483294275 +0000 UTC m=+156.370032410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:39 crc kubenswrapper[4886]: I1124 08:51:39.983534 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:39 crc kubenswrapper[4886]: E1124 08:51:39.984187 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.484141208 +0000 UTC m=+156.370879343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.008010 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4zfpn" podStartSLOduration=6.007977101 podStartE2EDuration="6.007977101s" podCreationTimestamp="2025-11-24 08:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:39.998355447 +0000 UTC m=+155.885093602" watchObservedRunningTime="2025-11-24 08:51:40.007977101 +0000 UTC m=+155.894715236"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.052971 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-njfqk" podStartSLOduration=135.052936643 podStartE2EDuration="2m15.052936643s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.046504167 +0000 UTC m=+155.933242542" watchObservedRunningTime="2025-11-24 08:51:40.052936643 +0000 UTC m=+155.939674778"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.084354 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzqpc" podStartSLOduration=134.084334823 podStartE2EDuration="2m14.084334823s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.082638087 +0000 UTC m=+155.969376222" watchObservedRunningTime="2025-11-24 08:51:40.084334823 +0000 UTC m=+155.971072948"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.086542 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.086899 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.586880863 +0000 UTC m=+156.473618998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.140272 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" podStartSLOduration=135.140251745 podStartE2EDuration="2m15.140251745s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.138470047 +0000 UTC m=+156.025208182" watchObservedRunningTime="2025-11-24 08:51:40.140251745 +0000 UTC m=+156.026989880"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.182820 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" podStartSLOduration=134.182795941 podStartE2EDuration="2m14.182795941s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.166735021 +0000 UTC m=+156.053473156" watchObservedRunningTime="2025-11-24 08:51:40.182795941 +0000 UTC m=+156.069534076"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.187855 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.190511 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.690486162 +0000 UTC m=+156.577224297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.208221 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5fr58" podStartSLOduration=134.208199047 podStartE2EDuration="2m14.208199047s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.20758049 +0000 UTC m=+156.094318625" watchObservedRunningTime="2025-11-24 08:51:40.208199047 +0000 UTC m=+156.094937182"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.239745 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" podStartSLOduration=134.239720961 podStartE2EDuration="2m14.239720961s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.238922349 +0000 UTC m=+156.125660494" watchObservedRunningTime="2025-11-24 08:51:40.239720961 +0000 UTC m=+156.126459096"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.286988 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" podStartSLOduration=134.286966455 podStartE2EDuration="2m14.286966455s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.285023762 +0000 UTC m=+156.171761897" watchObservedRunningTime="2025-11-24 08:51:40.286966455 +0000 UTC m=+156.173704590"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.291699 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.292076 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.792053695 +0000 UTC m=+156.678791830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.394939 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.395477 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.895454938 +0000 UTC m=+156.782193073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.403810 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hplr8" podStartSLOduration=134.403785816 podStartE2EDuration="2m14.403785816s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.395258033 +0000 UTC m=+156.281996168" watchObservedRunningTime="2025-11-24 08:51:40.403785816 +0000 UTC m=+156.290523951"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.497140 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.497642 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:40.997622058 +0000 UTC m=+156.884360193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.544107 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:51:40 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld
Nov 24 08:51:40 crc kubenswrapper[4886]: [+]process-running ok
Nov 24 08:51:40 crc kubenswrapper[4886]: healthz check failed
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.544204 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.598473 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.598886 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.098870942 +0000 UTC m=+156.985609077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.700032 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.700216 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.200191408 +0000 UTC m=+157.086929543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.700400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d"
Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.700918 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.200902588 +0000 UTC m=+157.087640773 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.802423 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.802675 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.302640396 +0000 UTC m=+157.189378531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.802809 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.803376 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.303359245 +0000 UTC m=+157.190097370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.809920 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" event={"ID":"e8247ed1-a90a-409b-a326-07bb154a4d16","Type":"ContainerStarted","Data":"ab9a18afa0acfc3741d220cd22d53d14b7fc9b99de465e481ca6f1fbb47e6f67"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.809976 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" event={"ID":"e8247ed1-a90a-409b-a326-07bb154a4d16","Type":"ContainerStarted","Data":"3b4335008eae77acb33630d63437444f53043cff57baf35963ffe08252fb1142"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.815315 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x8hmr" event={"ID":"f0014c51-8bc7-44e7-846b-3c7d97a67913","Type":"ContainerStarted","Data":"5d315ba09f8c74eaa5cb616a61c85b4845e19fedb1e585e1e23aef1ad7694500"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.815351 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x8hmr" event={"ID":"f0014c51-8bc7-44e7-846b-3c7d97a67913","Type":"ContainerStarted","Data":"e28d8eec429fe3f062d2b89935bf26b139bc57c74d743d518cf967c57fecd4a2"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.815467 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.830194 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" event={"ID":"efe0b276-8633-4435-b8af-d4651276c24f","Type":"ContainerStarted","Data":"a60274d18244a06613609d7b060a6844bdd28732e10303b39dd61bad94f5caf0"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.832072 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" event={"ID":"400260cc-a84e-4d59-99f5-5e2359ceee1c","Type":"ContainerStarted","Data":"f9ae93c04dce5aad2d9e44491c91a5020f91b53174cfa0b71143472cc03fba55"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.834976 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t4g4w" event={"ID":"fd02555c-e6fc-4825-a06a-53497a2cfeda","Type":"ContainerStarted","Data":"afe31dc2a532f3b4ffb1bb13c13a015fc7ea22581799cf452b61384a0647c917"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.838515 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" event={"ID":"a356b9d0-54fe-4bac-9589-027e7cbfeb87","Type":"ContainerStarted","Data":"c7a48c5e000d76bb5184077ec86fd141e9fb3ba38d7ae986e43f6ed8136005fe"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.838569 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" event={"ID":"a356b9d0-54fe-4bac-9589-027e7cbfeb87","Type":"ContainerStarted","Data":"1fc883db6d26d506433a10bf328104726d3d851fae8cc2f9c2fb833f58dc2151"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.838942 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.840396 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjmsp" podStartSLOduration=134.84038376 podStartE2EDuration="2m14.84038376s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.838120018 +0000 UTC m=+156.724858153" watchObservedRunningTime="2025-11-24 08:51:40.84038376 +0000 UTC m=+156.727121895" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.842838 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-npx2b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.842912 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" podUID="a356b9d0-54fe-4bac-9589-027e7cbfeb87" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.846705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" event={"ID":"d879a534-2c8a-463b-bbf6-75213cb4d554","Type":"ContainerStarted","Data":"23b464734ae6f9f470a0d3d7691d2259885ef0e4cdd6bf9b120437444e9a455b"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.860383 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" event={"ID":"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f","Type":"ContainerStarted","Data":"40b8273daea6c6c338ad749402cc26f534da3831cb1ecdced5e16581d07e2073"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.860801 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" event={"ID":"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f","Type":"ContainerStarted","Data":"e4e33f5e92102d9531676016a3a36d08f22a2a5a0a99a3f0c83b93551d3c3c45"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.860930 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" event={"ID":"e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f","Type":"ContainerStarted","Data":"8c7df89cdb55bf8c3aae7362316f972d0162e8a324cf2b62d8c9a78021b2cf6f"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.879290 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x8hmr" podStartSLOduration=6.879263445 podStartE2EDuration="6.879263445s" podCreationTimestamp="2025-11-24 08:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.877936179 +0000 UTC m=+156.764674334" watchObservedRunningTime="2025-11-24 08:51:40.879263445 +0000 UTC m=+156.766001580" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.880279 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" event={"ID":"a9e033b5-0aef-4e45-924c-338d2a914c5a","Type":"ContainerStarted","Data":"c61f7aaa9b4962c50ff283089afa2e8c62485e9dbd9536181c37a036766fbfcf"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.881035 4886 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-f8pr4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.882281 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" podUID="a9e033b5-0aef-4e45-924c-338d2a914c5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.896431 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" event={"ID":"bbe26e55-76a5-4f66-b3c6-5f8933372332","Type":"ContainerStarted","Data":"ce0308b079692b773933ff07e05cc09002939330a38bc67e79b1961d6e53c61a"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.896485 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" event={"ID":"bbe26e55-76a5-4f66-b3c6-5f8933372332","Type":"ContainerStarted","Data":"777e2c7895f65a68681d181b57862b3b8fffb40543187821fcb8a894e282dff2"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.898933 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vz657" podStartSLOduration=135.898915104 podStartE2EDuration="2m15.898915104s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.897102564 +0000 UTC m=+156.783840699" watchObservedRunningTime="2025-11-24 08:51:40.898915104 +0000 UTC m=+156.785653239" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.909044 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 
08:51:40 crc kubenswrapper[4886]: E1124 08:51:40.910564 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.410538512 +0000 UTC m=+157.297276667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.919564 4886 generic.go:334] "Generic (PLEG): container finished" podID="2edcc7e5-5bfb-4c39-86c9-fabf007078f4" containerID="5a7bb6c18a0f698c46faeacebd7d1f3650e8ab25fa5124f87ecebe0ca03c7171" exitCode=0 Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.920947 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" event={"ID":"2edcc7e5-5bfb-4c39-86c9-fabf007078f4","Type":"ContainerDied","Data":"5a7bb6c18a0f698c46faeacebd7d1f3650e8ab25fa5124f87ecebe0ca03c7171"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.932619 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-59wgb" podStartSLOduration=134.932597077 podStartE2EDuration="2m14.932597077s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.93126697 +0000 UTC m=+156.818005105" watchObservedRunningTime="2025-11-24 08:51:40.932597077 +0000 UTC 
m=+156.819335242" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.932916 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" event={"ID":"3a73ff5c-5292-45f7-a7cf-97714a8a109d","Type":"ContainerStarted","Data":"1882652e53ce053a1e9758ba6d21623454b1e76c4e9c86c9473840a5fe4ecdb5"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.954572 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" event={"ID":"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6","Type":"ContainerStarted","Data":"910b8a11ac6adb15ab1697238b4c2fde6570905452a3a59dc6490f8dc0f29355"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.954642 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" event={"ID":"560047dc-48dd-40d2-b7b3-8a8e4db0d7c6","Type":"ContainerStarted","Data":"791fdf827da75096c77d480b68ab5ef2d9686aa3b3626a15dd8abf960c8528dd"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.967492 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" event={"ID":"97be751b-9ee7-45b8-bb05-5db918750f72","Type":"ContainerStarted","Data":"0abc702bab3629d970ee078e2c427906188a50378cf85ae5262b36d47a8c20ba"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.978908 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.986565 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7ftcq" event={"ID":"1559d3ac-0229-4e31-9d0b-ebf633409384","Type":"ContainerStarted","Data":"f2c04ae86a4a66d825ede0c359929256b9ba4ed11ed81784d17ecd24da92729b"} Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.988605 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-t4g4w" podStartSLOduration=6.988593431 podStartE2EDuration="6.988593431s" podCreationTimestamp="2025-11-24 08:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.955139274 +0000 UTC m=+156.841877409" watchObservedRunningTime="2025-11-24 08:51:40.988593431 +0000 UTC m=+156.875331566" Nov 24 08:51:40 crc kubenswrapper[4886]: I1124 08:51:40.992259 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dssrb" podStartSLOduration=134.992242431 podStartE2EDuration="2m14.992242431s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:40.991261724 +0000 UTC m=+156.877999869" watchObservedRunningTime="2025-11-24 08:51:40.992242431 +0000 UTC m=+156.878980566" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.017555 4886 generic.go:334] "Generic (PLEG): container finished" podID="eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2" containerID="2434d3018c9ab4bb0e74fc54419b97492ab98786f40ba291f73c31e2d37e2dee" exitCode=0 Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.017881 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" event={"ID":"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2","Type":"ContainerDied","Data":"2434d3018c9ab4bb0e74fc54419b97492ab98786f40ba291f73c31e2d37e2dee"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.017974 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" 
event={"ID":"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2","Type":"ContainerStarted","Data":"20925aefc07e518c2a7eb8adc9a26f82645edc9b080292f24744c8c7bbe42a01"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.018251 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.020850 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.520826244 +0000 UTC m=+157.407564469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.053328 4886 generic.go:334] "Generic (PLEG): container finished" podID="ce7242d2-301f-4d8f-816a-a36418be67ca" containerID="fe82fbcb532e8464fa273b60cc35157e5967b63fdaaf62219d5f617b92235899" exitCode=0 Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.053755 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" 
event={"ID":"ce7242d2-301f-4d8f-816a-a36418be67ca","Type":"ContainerDied","Data":"fe82fbcb532e8464fa273b60cc35157e5967b63fdaaf62219d5f617b92235899"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.053938 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" event={"ID":"ce7242d2-301f-4d8f-816a-a36418be67ca","Type":"ContainerStarted","Data":"e3145954ebfe54d5e3e47e235dd37dff43dc8e27e579750e02fdc4d13cff5383"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.074021 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" podStartSLOduration=135.073997551 podStartE2EDuration="2m15.073997551s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.039811794 +0000 UTC m=+156.926549929" watchObservedRunningTime="2025-11-24 08:51:41.073997551 +0000 UTC m=+156.960735686" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.074412 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-fqpl9" podStartSLOduration=135.074404882 podStartE2EDuration="2m15.074404882s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.070645619 +0000 UTC m=+156.957383764" watchObservedRunningTime="2025-11-24 08:51:41.074404882 +0000 UTC m=+156.961143037" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.079349 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" 
event={"ID":"421edb7e-dea2-4578-894e-32e9eb8aff3b","Type":"ContainerStarted","Data":"f144d2eb88a4990986576f2fe8ca03485a2c73a197ea9a4f30e28f5c39d38743"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.080247 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.085481 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v7qjg" event={"ID":"3349a550-cd49-4627-8fd8-7bb82f26c0e4","Type":"ContainerStarted","Data":"7eabab6598beb69c164e2e757ca9fce87163dfca0232ab350725b6f83e4b6af2"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.095294 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" event={"ID":"b65ba9fc-0a0d-49f2-9991-319b054df0b0","Type":"ContainerStarted","Data":"98637391d646e5def58fbedaa6d121a822611aa1c862a56bc29b7aaefd3ff356"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.096168 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.112576 4886 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4k7lh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.112647 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" podUID="b65ba9fc-0a0d-49f2-9991-319b054df0b0" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.119013 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.124331 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.624300529 +0000 UTC m=+157.511038664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.162583 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" event={"ID":"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57","Type":"ContainerStarted","Data":"4d14569ddb99bfff5b75e40becae19f1f0a81c8ac9a931ddd8e3133eb627b540"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.163763 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" 
event={"ID":"1c634f86-dcd4-4cf7-aa6f-4245be5d0b57","Type":"ContainerStarted","Data":"2e370c95bc4fc417f3ae106aee7518baf51dc567b943d1609311c639f595e08f"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.170077 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bbnxh" podStartSLOduration=136.170048493 podStartE2EDuration="2m16.170048493s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.113217896 +0000 UTC m=+156.999956031" watchObservedRunningTime="2025-11-24 08:51:41.170048493 +0000 UTC m=+157.056786638" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.171432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" event={"ID":"364b3e42-dafa-45cd-bf38-545cc2eb9e21","Type":"ContainerStarted","Data":"c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.171500 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" event={"ID":"364b3e42-dafa-45cd-bf38-545cc2eb9e21","Type":"ContainerStarted","Data":"e377aca0889ce4180db9788171f0bda4367d64116a43c0d48d257c4b2e5ef494"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.182225 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.187399 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tsmf5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Nov 24 
08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.191377 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" podUID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.214489 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" event={"ID":"9d51f527-2205-4113-9b65-655f3fab2e1c","Type":"ContainerStarted","Data":"e78540a8fdec0b28d2971feb4a2f9a3655249fe74b8d58d4a769083a0119477a"} Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.218429 4886 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z89jx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.218492 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" podUID="7b5ec71d-6e2e-4cf9-af45-b0147e7598b6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.226213 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 
08:51:41.231848 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4kfg8" podStartSLOduration=135.231825536 podStartE2EDuration="2m15.231825536s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.201602048 +0000 UTC m=+157.088340183" watchObservedRunningTime="2025-11-24 08:51:41.231825536 +0000 UTC m=+157.118563671" Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.232463 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.732450843 +0000 UTC m=+157.619188978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.252201 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" podStartSLOduration=135.252141092 podStartE2EDuration="2m15.252141092s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.232136994 +0000 UTC m=+157.118875129" watchObservedRunningTime="2025-11-24 08:51:41.252141092 +0000 UTC m=+157.138879227" Nov 24 08:51:41 crc 
kubenswrapper[4886]: I1124 08:51:41.299982 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vb86f" podStartSLOduration=136.299958033 podStartE2EDuration="2m16.299958033s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.298587205 +0000 UTC m=+157.185325340" watchObservedRunningTime="2025-11-24 08:51:41.299958033 +0000 UTC m=+157.186696168" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.327899 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" podStartSLOduration=135.327875408 podStartE2EDuration="2m15.327875408s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.326569362 +0000 UTC m=+157.213307497" watchObservedRunningTime="2025-11-24 08:51:41.327875408 +0000 UTC m=+157.214613553" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.338040 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.338396 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.838372915 +0000 UTC m=+157.725111050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.338727 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.340789 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.840774561 +0000 UTC m=+157.727512696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.440312 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.440865 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:41.940842593 +0000 UTC m=+157.827580728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.500274 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" podStartSLOduration=135.500247301 podStartE2EDuration="2m15.500247301s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.498177824 +0000 UTC m=+157.384915969" watchObservedRunningTime="2025-11-24 08:51:41.500247301 +0000 UTC m=+157.386985436" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.531806 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:41 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:41 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:41 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.532168 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.533747 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ztbpv" podStartSLOduration=136.533720038 podStartE2EDuration="2m16.533720038s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.53124048 +0000 UTC m=+157.417978615" watchObservedRunningTime="2025-11-24 08:51:41.533720038 +0000 UTC m=+157.420458173" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.542466 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.542958 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.04293271 +0000 UTC m=+157.929670845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.565714 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-km4wr" podStartSLOduration=136.565692754 podStartE2EDuration="2m16.565692754s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:41.564665166 +0000 UTC m=+157.451403301" watchObservedRunningTime="2025-11-24 08:51:41.565692754 +0000 UTC m=+157.452430889" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.645852 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.646197 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.146177699 +0000 UTC m=+158.032915834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.747179 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.747585 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.247565638 +0000 UTC m=+158.134303763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.758893 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.847740 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.848223 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.348206975 +0000 UTC m=+158.234945110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:41 crc kubenswrapper[4886]: I1124 08:51:41.949061 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:41 crc kubenswrapper[4886]: E1124 08:51:41.949594 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.449568183 +0000 UTC m=+158.336306318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.050047 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.050231 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.55020683 +0000 UTC m=+158.436944965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.050842 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.051350 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.551337461 +0000 UTC m=+158.438075596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.152579 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.152784 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.65275057 +0000 UTC m=+158.539488715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.152914 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.153385 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.653376207 +0000 UTC m=+158.540114342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.222622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" event={"ID":"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2","Type":"ContainerStarted","Data":"bf484f62bed39076290683fe84a9079e761903cc854b4845a694b542d6c3ae6a"} Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.222676 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" event={"ID":"eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2","Type":"ContainerStarted","Data":"8268ad16e4e2617eb7462ad4877ebf2ed609d5178fb5de7ca33fbdd700a32a6e"} Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.225767 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" event={"ID":"2edcc7e5-5bfb-4c39-86c9-fabf007078f4","Type":"ContainerStarted","Data":"1a962e7acc14244212a1d23ccd442561c85143fcd121e0debf121731fbb40c0e"} Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.227729 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" event={"ID":"ce7242d2-301f-4d8f-816a-a36418be67ca","Type":"ContainerStarted","Data":"f1f86ba81dfd3562b08d9452a02e538424a4ce2f7ea9ac19c27cd38a4c28ef54"} Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.228542 4886 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-npx2b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.228586 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" podUID="a356b9d0-54fe-4bac-9589-027e7cbfeb87" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.229541 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tsmf5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.229683 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" podUID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.243909 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f8pr4" Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.251145 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.251220 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.253859 4886 patch_prober.go:28] interesting 
pod/apiserver-7bbb656c7d-njp2h container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.254004 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.254211 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" podUID="2edcc7e5-5bfb-4c39-86c9-fabf007078f4" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.254086 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.754070446 +0000 UTC m=+158.640808581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.254432 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.254936 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.75491613 +0000 UTC m=+158.641654265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.353831 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" podStartSLOduration=137.353807619 podStartE2EDuration="2m17.353807619s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:42.35238412 +0000 UTC m=+158.239122275" watchObservedRunningTime="2025-11-24 08:51:42.353807619 +0000 UTC m=+158.240545754" Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.355660 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.356318 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.856289287 +0000 UTC m=+158.743027422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.463862 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.464245 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:42.964232835 +0000 UTC m=+158.850970970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.464382 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" podStartSLOduration=137.464362639 podStartE2EDuration="2m17.464362639s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:42.462717003 +0000 UTC m=+158.349455138" watchObservedRunningTime="2025-11-24 08:51:42.464362639 +0000 UTC m=+158.351100774" Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.531281 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:42 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:42 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:42 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.531363 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.565059 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.565286 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.065253253 +0000 UTC m=+158.951991388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.565695 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.566069 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.066061085 +0000 UTC m=+158.952799210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.666734 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.666931 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.166894188 +0000 UTC m=+159.053632333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.667038 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.667416 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.167401182 +0000 UTC m=+159.054139317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.767946 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.768205 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.268174343 +0000 UTC m=+159.154912478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.768410 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.768829 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.268812371 +0000 UTC m=+159.155550506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.869621 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.870098 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.370060925 +0000 UTC m=+159.256799070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.971754 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:42 crc kubenswrapper[4886]: E1124 08:51:42.972281 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.472260065 +0000 UTC m=+159.358998200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:42 crc kubenswrapper[4886]: I1124 08:51:42.998957 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.053284 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" podStartSLOduration=137.053258204 podStartE2EDuration="2m17.053258204s" podCreationTimestamp="2025-11-24 08:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:42.572548243 +0000 UTC m=+158.459286378" watchObservedRunningTime="2025-11-24 08:51:43.053258204 +0000 UTC m=+158.939996339" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.072626 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.072827 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 08:51:43.572794049 +0000 UTC m=+159.459532174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.073399 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.073764 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.573754166 +0000 UTC m=+159.460492291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.155942 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.156013 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.157336 4886 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lz4ml container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.157395 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" podUID="eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.175023 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 
08:51:43.175269 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.675233936 +0000 UTC m=+159.561972081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.175768 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.176191 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.676178332 +0000 UTC m=+159.562916457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.194887 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l8vn8"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.196089 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.199176 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.212908 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l8vn8"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.235642 4886 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tsmf5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.235727 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" podUID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.235947 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.276507 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.276724 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.776647875 +0000 UTC m=+159.663386010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.276899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.276946 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-74hf9\" (UniqueName: \"kubernetes.io/projected/d89bb378-d235-4377-9908-0008691b9174-kube-api-access-74hf9\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.276980 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-catalog-content\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.277012 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-utilities\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.277450 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.777442157 +0000 UTC m=+159.664180292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.378671 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.378861 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.878831355 +0000 UTC m=+159.765569490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.379373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.380683 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74hf9\" (UniqueName: \"kubernetes.io/projected/d89bb378-d235-4377-9908-0008691b9174-kube-api-access-74hf9\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.380727 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.880716647 +0000 UTC m=+159.767454782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.380872 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-catalog-content\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.381063 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-utilities\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.381451 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-catalog-content\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.382505 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-utilities\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " 
pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.396858 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2s97s"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.398067 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.401799 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.408355 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74hf9\" (UniqueName: \"kubernetes.io/projected/d89bb378-d235-4377-9908-0008691b9174-kube-api-access-74hf9\") pod \"community-operators-l8vn8\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.411045 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2s97s"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.484677 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.484901 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-catalog-content\") pod \"certified-operators-2s97s\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 
24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.484949 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-utilities\") pod \"certified-operators-2s97s\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.484989 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkrdt\" (UniqueName: \"kubernetes.io/projected/44504d41-1a7d-4a15-a270-24325b0954a9-kube-api-access-vkrdt\") pod \"certified-operators-2s97s\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.485100 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:43.985082336 +0000 UTC m=+159.871820471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.512742 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.532417 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:43 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:43 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:43 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.532505 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.586074 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-catalog-content\") pod \"certified-operators-2s97s\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.586162 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-utilities\") pod \"certified-operators-2s97s\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.586205 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkrdt\" (UniqueName: \"kubernetes.io/projected/44504d41-1a7d-4a15-a270-24325b0954a9-kube-api-access-vkrdt\") pod \"certified-operators-2s97s\" 
(UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.586248 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.586603 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.086587438 +0000 UTC m=+159.973325573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.587493 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-catalog-content\") pod \"certified-operators-2s97s\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.587639 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-utilities\") pod \"certified-operators-2s97s\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.598724 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ztc6l"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.599694 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.625451 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ztc6l"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.669470 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkrdt\" (UniqueName: \"kubernetes.io/projected/44504d41-1a7d-4a15-a270-24325b0954a9-kube-api-access-vkrdt\") pod \"certified-operators-2s97s\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.690892 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.691241 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-catalog-content\") pod \"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc 
kubenswrapper[4886]: I1124 08:51:43.691299 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-utilities\") pod \"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.691359 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xsnr\" (UniqueName: \"kubernetes.io/projected/925c272d-d181-4754-a9cb-9b9b11e18f6c-kube-api-access-6xsnr\") pod \"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.691533 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.191497332 +0000 UTC m=+160.078235467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.748663 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.793177 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-utilities\") pod \"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.793272 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xsnr\" (UniqueName: \"kubernetes.io/projected/925c272d-d181-4754-a9cb-9b9b11e18f6c-kube-api-access-6xsnr\") pod \"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.793345 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.793373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-catalog-content\") pod \"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.793890 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-catalog-content\") pod 
\"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.794273 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.294255108 +0000 UTC m=+160.180993243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.794547 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-utilities\") pod \"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.821259 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qjxlb"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.825871 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.843899 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjxlb"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.863805 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xsnr\" (UniqueName: \"kubernetes.io/projected/925c272d-d181-4754-a9cb-9b9b11e18f6c-kube-api-access-6xsnr\") pod \"community-operators-ztc6l\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.894909 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.895251 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.395231285 +0000 UTC m=+160.281969420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.895285 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.895581 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.395575004 +0000 UTC m=+160.282313139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.899571 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.900428 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.911650 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.935866 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.941202 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.956506 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.997035 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.997279 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e1a52d4-abae-4519-8288-c1c56ea36e76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e1a52d4-abae-4519-8288-c1c56ea36e76\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.997311 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e1a52d4-abae-4519-8288-c1c56ea36e76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e1a52d4-abae-4519-8288-c1c56ea36e76\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.997364 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-catalog-content\") pod \"certified-operators-qjxlb\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.997412 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84x56\" (UniqueName: 
\"kubernetes.io/projected/aa41e796-f145-4455-b8fa-d751c98f7b5f-kube-api-access-84x56\") pod \"certified-operators-qjxlb\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:43 crc kubenswrapper[4886]: I1124 08:51:43.997440 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-utilities\") pod \"certified-operators-qjxlb\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:43 crc kubenswrapper[4886]: E1124 08:51:43.997566 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.497547359 +0000 UTC m=+160.384285494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.099015 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e1a52d4-abae-4519-8288-c1c56ea36e76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e1a52d4-abae-4519-8288-c1c56ea36e76\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.099456 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-catalog-content\") pod \"certified-operators-qjxlb\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.099516 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.099551 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84x56\" (UniqueName: \"kubernetes.io/projected/aa41e796-f145-4455-b8fa-d751c98f7b5f-kube-api-access-84x56\") pod \"certified-operators-qjxlb\" 
(UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.099597 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-utilities\") pod \"certified-operators-qjxlb\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.099643 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e1a52d4-abae-4519-8288-c1c56ea36e76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e1a52d4-abae-4519-8288-c1c56ea36e76\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.100077 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e1a52d4-abae-4519-8288-c1c56ea36e76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8e1a52d4-abae-4519-8288-c1c56ea36e76\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.100529 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-catalog-content\") pod \"certified-operators-qjxlb\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.100855 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 08:51:44.600841519 +0000 UTC m=+160.487579654 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.101445 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-utilities\") pod \"certified-operators-qjxlb\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.141106 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e1a52d4-abae-4519-8288-c1c56ea36e76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8e1a52d4-abae-4519-8288-c1c56ea36e76\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.142995 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84x56\" (UniqueName: \"kubernetes.io/projected/aa41e796-f145-4455-b8fa-d751c98f7b5f-kube-api-access-84x56\") pod \"certified-operators-qjxlb\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.157619 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.201965 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.202350 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.70233383 +0000 UTC m=+160.589071965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.259674 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l8vn8"] Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.262709 4886 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4z5n9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.262776 4886 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" podUID="ce7242d2-301f-4d8f-816a-a36418be67ca" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.275496 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.304338 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.304844 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.804820958 +0000 UTC m=+160.691559093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.408866 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.412495 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:44.912468088 +0000 UTC m=+160.799206223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.514959 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.515502 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.015481221 +0000 UTC m=+160.902219356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.539698 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2s97s"] Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.546001 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:44 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:44 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:44 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.546071 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:44 crc kubenswrapper[4886]: W1124 08:51:44.602359 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44504d41_1a7d_4a15_a270_24325b0954a9.slice/crio-a7ee8f094397260ac6af09a294086fb9caa9364a2b7ce9cb9f304f524e8e8c15 WatchSource:0}: Error finding container a7ee8f094397260ac6af09a294086fb9caa9364a2b7ce9cb9f304f524e8e8c15: Status 404 returned error can't find the container with id a7ee8f094397260ac6af09a294086fb9caa9364a2b7ce9cb9f304f524e8e8c15 
Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.616076 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.616659 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.116634772 +0000 UTC m=+161.003372917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.718061 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.718498 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 08:51:45.218482013 +0000 UTC m=+161.105220138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.822822 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.823380 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.323361467 +0000 UTC m=+161.210099602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:44 crc kubenswrapper[4886]: I1124 08:51:44.926053 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:44 crc kubenswrapper[4886]: E1124 08:51:44.936467 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.436333332 +0000 UTC m=+161.323071467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.014167 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ztc6l"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.028184 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:45 crc kubenswrapper[4886]: E1124 08:51:45.028632 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.52859846 +0000 UTC m=+161.415336585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.031362 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qjxlb"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.077730 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.130296 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:45 crc kubenswrapper[4886]: E1124 08:51:45.130655 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.630638306 +0000 UTC m=+161.517376461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.148769 4886 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 08:51:45 crc kubenswrapper[4886]: W1124 08:51:45.157966 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8e1a52d4_abae_4519_8288_c1c56ea36e76.slice/crio-f74734f3398d573af9e02b10fa78b20a00f7ebae33b83af38b1ba7576dbd35ed WatchSource:0}: Error finding container f74734f3398d573af9e02b10fa78b20a00f7ebae33b83af38b1ba7576dbd35ed: Status 404 returned error can't find the container with id f74734f3398d573af9e02b10fa78b20a00f7ebae33b83af38b1ba7576dbd35ed Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.232669 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:45 crc kubenswrapper[4886]: E1124 08:51:45.233119 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.733097714 +0000 UTC m=+161.619835849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.266988 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.267744 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.271521 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.271875 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.304315 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.334796 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e4ca688-82ea-4513-8a11-1adb96141627-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1e4ca688-82ea-4513-8a11-1adb96141627\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.335290 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1e4ca688-82ea-4513-8a11-1adb96141627-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1e4ca688-82ea-4513-8a11-1adb96141627\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.335412 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:45 crc kubenswrapper[4886]: E1124 08:51:45.335818 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.835804378 +0000 UTC m=+161.722542513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.346621 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztc6l" event={"ID":"925c272d-d181-4754-a9cb-9b9b11e18f6c","Type":"ContainerStarted","Data":"3d5c83feceeea5a5abbefb15397cbbc2098a7b23d5c642760be2ca5f40d8ee49"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.349938 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e1a52d4-abae-4519-8288-c1c56ea36e76","Type":"ContainerStarted","Data":"f74734f3398d573af9e02b10fa78b20a00f7ebae33b83af38b1ba7576dbd35ed"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.355721 4886 generic.go:334] "Generic (PLEG): container finished" podID="d89bb378-d235-4377-9908-0008691b9174" containerID="d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b" exitCode=0 Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.355783 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8vn8" event={"ID":"d89bb378-d235-4377-9908-0008691b9174","Type":"ContainerDied","Data":"d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.355805 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8vn8" 
event={"ID":"d89bb378-d235-4377-9908-0008691b9174","Type":"ContainerStarted","Data":"9d477cb4e0cde3e72c15845d9afb63fdb4e8904dcd37a501adc1c90e4062e9a0"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.358213 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.383944 4886 generic.go:334] "Generic (PLEG): container finished" podID="44504d41-1a7d-4a15-a270-24325b0954a9" containerID="2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc" exitCode=0 Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.384054 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2s97s" event={"ID":"44504d41-1a7d-4a15-a270-24325b0954a9","Type":"ContainerDied","Data":"2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.384091 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2s97s" event={"ID":"44504d41-1a7d-4a15-a270-24325b0954a9","Type":"ContainerStarted","Data":"a7ee8f094397260ac6af09a294086fb9caa9364a2b7ce9cb9f304f524e8e8c15"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.399995 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjxlb" event={"ID":"aa41e796-f145-4455-b8fa-d751c98f7b5f","Type":"ContainerStarted","Data":"042328de5a589fff8df2ca6ec0c94315433df229ab551a1ef35213a65075e6b6"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.425681 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tbg5c"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.432084 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.436590 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.437070 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e4ca688-82ea-4513-8a11-1adb96141627-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1e4ca688-82ea-4513-8a11-1adb96141627\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.437111 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e4ca688-82ea-4513-8a11-1adb96141627-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1e4ca688-82ea-4513-8a11-1adb96141627\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:45 crc kubenswrapper[4886]: E1124 08:51:45.439262 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:51:45.939239313 +0000 UTC m=+161.825977458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.441301 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e4ca688-82ea-4513-8a11-1adb96141627-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1e4ca688-82ea-4513-8a11-1adb96141627\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.453071 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.463580 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" event={"ID":"3a73ff5c-5292-45f7-a7cf-97714a8a109d","Type":"ContainerStarted","Data":"3d5d75c6e079444638db1ebe6d8ee2eccc46e42254abe3e5966cfcfa6e656a70"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.463673 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" event={"ID":"3a73ff5c-5292-45f7-a7cf-97714a8a109d","Type":"ContainerStarted","Data":"591e938cdeebee90301e6dff729267824d9d9af1fcb32d495835263238e52d78"} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.485741 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbg5c"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.518451 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1e4ca688-82ea-4513-8a11-1adb96141627-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1e4ca688-82ea-4513-8a11-1adb96141627\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.542042 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-utilities\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.542118 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-catalog-content\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.542163 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z5tm\" (UniqueName: \"kubernetes.io/projected/a5f30dce-707e-45e7-a928-4602478ac07d-kube-api-access-9z5tm\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.542207 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:45 crc kubenswrapper[4886]: E1124 08:51:45.543328 4886 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:46.043311024 +0000 UTC m=+161.930049159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.543683 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:45 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:45 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:45 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.543720 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.635754 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.648982 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.649284 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-utilities\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.649350 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-catalog-content\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.649376 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z5tm\" (UniqueName: \"kubernetes.io/projected/a5f30dce-707e-45e7-a928-4602478ac07d-kube-api-access-9z5tm\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: E1124 08:51:45.649946 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 08:51:46.149924176 +0000 UTC m=+162.036662311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.650372 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-utilities\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.650648 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-catalog-content\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.689461 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z5tm\" (UniqueName: \"kubernetes.io/projected/a5f30dce-707e-45e7-a928-4602478ac07d-kube-api-access-9z5tm\") pod \"redhat-marketplace-tbg5c\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.750741 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:45 crc kubenswrapper[4886]: E1124 08:51:45.751305 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:51:46.251290323 +0000 UTC m=+162.138028448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n794d" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.759535 4886 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T08:51:45.148798794Z","Handler":null,"Name":""} Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.774073 4886 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.774128 4886 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.814877 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.839879 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jxrnh"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.842709 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.846842 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxrnh"] Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.856224 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.947344 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.959357 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-utilities\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.959404 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-catalog-content\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.959472 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.959556 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmnb\" (UniqueName: \"kubernetes.io/projected/b87e82fa-37f9-46dc-8170-c77373da3ff8-kube-api-access-qhmnb\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.978972 4886 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 08:51:45 crc kubenswrapper[4886]: I1124 08:51:45.979072 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.050405 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n794d\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.061440 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhmnb\" (UniqueName: \"kubernetes.io/projected/b87e82fa-37f9-46dc-8170-c77373da3ff8-kube-api-access-qhmnb\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.061525 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-utilities\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.061551 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-catalog-content\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.062137 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-catalog-content\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.062795 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-utilities\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.095908 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhmnb\" (UniqueName: \"kubernetes.io/projected/b87e82fa-37f9-46dc-8170-c77373da3ff8-kube-api-access-qhmnb\") pod \"redhat-marketplace-jxrnh\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.158508 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 08:51:46 crc kubenswrapper[4886]: W1124 08:51:46.164178 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1e4ca688_82ea_4513_8a11_1adb96141627.slice/crio-f8cd166d1c6dcf1f43f25d436eafa7c2db93afaf6221a4db6d7fbd4393cf60a1 WatchSource:0}: Error finding container f8cd166d1c6dcf1f43f25d436eafa7c2db93afaf6221a4db6d7fbd4393cf60a1: Status 404 returned error 
can't find the container with id f8cd166d1c6dcf1f43f25d436eafa7c2db93afaf6221a4db6d7fbd4393cf60a1 Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.202005 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.216986 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbg5c"] Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.248884 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6cvz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.248952 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w6cvz" podUID="580c4fdc-bdb3-4099-b715-ac4c63acecb2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.249508 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.249733 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6cvz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.249767 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6cvz" podUID="580c4fdc-bdb3-4099-b715-ac4c63acecb2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 24 08:51:46 crc kubenswrapper[4886]: W1124 08:51:46.258095 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5f30dce_707e_45e7_a928_4602478ac07d.slice/crio-62204c3138f6bb2fcd522e6fc706c1bd910bc9b22ad92d371867bb9c576be373 WatchSource:0}: Error finding container 62204c3138f6bb2fcd522e6fc706c1bd910bc9b22ad92d371867bb9c576be373: Status 404 returned error can't find the container with id 62204c3138f6bb2fcd522e6fc706c1bd910bc9b22ad92d371867bb9c576be373 Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.379122 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pg5ls"] Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.383508 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.387378 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.399822 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pg5ls"] Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.469506 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-utilities\") pod \"redhat-operators-pg5ls\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.469596 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-catalog-content\") pod \"redhat-operators-pg5ls\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.469635 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8fwn\" (UniqueName: \"kubernetes.io/projected/5cfd24b4-c215-49b2-af8e-a3875c05c738-kube-api-access-v8fwn\") pod \"redhat-operators-pg5ls\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.489260 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-845fz" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.495634 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerID="0dedc518dbe713a3614a67d79e2f30f969a0d8104d2030a7aa5866b969fdef1a" exitCode=0 Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.495723 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjxlb" event={"ID":"aa41e796-f145-4455-b8fa-d751c98f7b5f","Type":"ContainerDied","Data":"0dedc518dbe713a3614a67d79e2f30f969a0d8104d2030a7aa5866b969fdef1a"} Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.506510 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" event={"ID":"3a73ff5c-5292-45f7-a7cf-97714a8a109d","Type":"ContainerStarted","Data":"1ac2f8315921f8c71793aac976f53ed2f9d16b4d88622ad9baa6d17bc3463c90"} Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.510255 4886 generic.go:334] "Generic (PLEG): container finished" podID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerID="8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d" exitCode=0 Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.510321 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztc6l" event={"ID":"925c272d-d181-4754-a9cb-9b9b11e18f6c","Type":"ContainerDied","Data":"8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d"} Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.513224 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbg5c" event={"ID":"a5f30dce-707e-45e7-a928-4602478ac07d","Type":"ContainerStarted","Data":"62204c3138f6bb2fcd522e6fc706c1bd910bc9b22ad92d371867bb9c576be373"} Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.526115 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"8e1a52d4-abae-4519-8288-c1c56ea36e76","Type":"ContainerStarted","Data":"0918c10071238b13c57c5eedc77f5027bd62dcd0e50bd041e899b164e608cf35"} Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.538311 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:46 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:46 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:46 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.538387 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.546693 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1e4ca688-82ea-4513-8a11-1adb96141627","Type":"ContainerStarted","Data":"f8cd166d1c6dcf1f43f25d436eafa7c2db93afaf6221a4db6d7fbd4393cf60a1"} Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.577816 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-catalog-content\") pod \"redhat-operators-pg5ls\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.577914 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8fwn\" (UniqueName: \"kubernetes.io/projected/5cfd24b4-c215-49b2-af8e-a3875c05c738-kube-api-access-v8fwn\") pod \"redhat-operators-pg5ls\" 
(UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.578080 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-utilities\") pod \"redhat-operators-pg5ls\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.579634 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-catalog-content\") pod \"redhat-operators-pg5ls\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.580017 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-utilities\") pod \"redhat-operators-pg5ls\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.621641 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rj5kj" podStartSLOduration=12.6216144 podStartE2EDuration="12.6216144s" podCreationTimestamp="2025-11-24 08:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:46.576758131 +0000 UTC m=+162.463496286" watchObservedRunningTime="2025-11-24 08:51:46.6216144 +0000 UTC m=+162.508352545" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.648289 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8fwn\" (UniqueName: 
\"kubernetes.io/projected/5cfd24b4-c215-49b2-af8e-a3875c05c738-kube-api-access-v8fwn\") pod \"redhat-operators-pg5ls\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.749196 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.785672 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j6kp4"] Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.787037 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.829639 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6kp4"] Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.872010 4886 patch_prober.go:28] interesting pod/console-f9d7485db-tbxjm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.872117 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tbxjm" podUID="87f902e1-073b-4ccd-8b3a-717f802e9671" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.906476 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-catalog-content\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " 
pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.906555 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-utilities\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.906651 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wmcl\" (UniqueName: \"kubernetes.io/projected/98e3e498-9c73-407d-91f1-1032f1d0a4b2-kube-api-access-4wmcl\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.941613 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.948054 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n794d"] Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.950344 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.950533 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4z5n9" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.950628 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.950706 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:46 crc kubenswrapper[4886]: I1124 08:51:46.953409 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.009490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wmcl\" (UniqueName: \"kubernetes.io/projected/98e3e498-9c73-407d-91f1-1032f1d0a4b2-kube-api-access-4wmcl\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.009564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-catalog-content\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.009607 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-utilities\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.015435 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-catalog-content\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.015849 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-utilities\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.047345 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxrnh"] Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.080361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wmcl\" (UniqueName: \"kubernetes.io/projected/98e3e498-9c73-407d-91f1-1032f1d0a4b2-kube-api-access-4wmcl\") pod \"redhat-operators-j6kp4\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.192269 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.266762 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.274523 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njp2h" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.307376 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pg5ls"] Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.323676 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z89jx" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.528484 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 
08:51:47.532547 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:47 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:47 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:47 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.532632 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.573052 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1e4ca688-82ea-4513-8a11-1adb96141627","Type":"ContainerStarted","Data":"65c5348e1bad3fd4d51643662cdbea2d9f22ec0f1ede69dacbb76c050b3d74b8"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.580848 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" event={"ID":"30599c42-eef7-4967-b84f-95b49a225bd6","Type":"ContainerStarted","Data":"010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.580915 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" event={"ID":"30599c42-eef7-4967-b84f-95b49a225bd6","Type":"ContainerStarted","Data":"35d6c55573a07de6320c4d8e5979aae5d1ccd14f60ec0ab082181b41f8322274"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.581710 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:51:47 crc 
kubenswrapper[4886]: I1124 08:51:47.592890 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.592837223 podStartE2EDuration="2.592837223s" podCreationTimestamp="2025-11-24 08:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:47.588653898 +0000 UTC m=+163.475392033" watchObservedRunningTime="2025-11-24 08:51:47.592837223 +0000 UTC m=+163.479575358" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.601730 4886 generic.go:334] "Generic (PLEG): container finished" podID="a5f30dce-707e-45e7-a928-4602478ac07d" containerID="c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d" exitCode=0 Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.602169 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbg5c" event={"ID":"a5f30dce-707e-45e7-a928-4602478ac07d","Type":"ContainerDied","Data":"c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.607184 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.614805 4886 generic.go:334] "Generic (PLEG): container finished" podID="8e1a52d4-abae-4519-8288-c1c56ea36e76" containerID="0918c10071238b13c57c5eedc77f5027bd62dcd0e50bd041e899b164e608cf35" exitCode=0 Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.615171 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e1a52d4-abae-4519-8288-c1c56ea36e76","Type":"ContainerDied","Data":"0918c10071238b13c57c5eedc77f5027bd62dcd0e50bd041e899b164e608cf35"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.624904 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg5ls" event={"ID":"5cfd24b4-c215-49b2-af8e-a3875c05c738","Type":"ContainerStarted","Data":"7a352e3ab3551542b21d9f379fc66befe1f041a2524d17733b2482398058a79e"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.626932 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" podStartSLOduration=142.626920777 podStartE2EDuration="2m22.626920777s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:47.621354204 +0000 UTC m=+163.508092339" watchObservedRunningTime="2025-11-24 08:51:47.626920777 +0000 UTC m=+163.513658912" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.643177 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxrnh" event={"ID":"b87e82fa-37f9-46dc-8170-c77373da3ff8","Type":"ContainerStarted","Data":"cd19e2b1c8d0e2350ac66224460f947f74b6b9c3305b95f37c450a6dc5e00738"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.644785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxrnh" event={"ID":"b87e82fa-37f9-46dc-8170-c77373da3ff8","Type":"ContainerStarted","Data":"951704a65933ceeb5ddf0a226ec99e0cb66d75dcb4ba65c58c073f3038f5e686"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.653961 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-npx2b" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.673635 4886 generic.go:334] "Generic (PLEG): container finished" podID="1afd949e-d0f2-41b8-9632-917df3468232" containerID="7b3dca954eaf4c9da537eafa5a081d7a3b680b6d0ded76a6a85c95f22d76af83" exitCode=0 Nov 24 08:51:47 crc kubenswrapper[4886]: 
I1124 08:51:47.674165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" event={"ID":"1afd949e-d0f2-41b8-9632-917df3468232","Type":"ContainerDied","Data":"7b3dca954eaf4c9da537eafa5a081d7a3b680b6d0ded76a6a85c95f22d76af83"} Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.822795 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.843315 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7-metrics-certs\") pod \"network-metrics-daemon-fkfxv\" (UID: \"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7\") " pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:47 crc kubenswrapper[4886]: I1124 08:51:47.938648 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6kp4"] Nov 24 08:51:47 crc kubenswrapper[4886]: W1124 08:51:47.994294 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98e3e498_9c73_407d_91f1_1032f1d0a4b2.slice/crio-5afb3d7f6b398600d05b416765a77d40574ae78047fbc8a60b6f08109c19eb44 WatchSource:0}: Error finding container 5afb3d7f6b398600d05b416765a77d40574ae78047fbc8a60b6f08109c19eb44: Status 404 returned error can't find the container with id 5afb3d7f6b398600d05b416765a77d40574ae78047fbc8a60b6f08109c19eb44 Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.081547 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-fkfxv" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.101035 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.140194 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e1a52d4-abae-4519-8288-c1c56ea36e76-kube-api-access\") pod \"8e1a52d4-abae-4519-8288-c1c56ea36e76\" (UID: \"8e1a52d4-abae-4519-8288-c1c56ea36e76\") " Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.140241 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e1a52d4-abae-4519-8288-c1c56ea36e76-kubelet-dir\") pod \"8e1a52d4-abae-4519-8288-c1c56ea36e76\" (UID: \"8e1a52d4-abae-4519-8288-c1c56ea36e76\") " Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.140636 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e1a52d4-abae-4519-8288-c1c56ea36e76-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e1a52d4-abae-4519-8288-c1c56ea36e76" (UID: "8e1a52d4-abae-4519-8288-c1c56ea36e76"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.157373 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e1a52d4-abae-4519-8288-c1c56ea36e76-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e1a52d4-abae-4519-8288-c1c56ea36e76" (UID: "8e1a52d4-abae-4519-8288-c1c56ea36e76"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.163176 4886 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lz4ml container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]log ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]etcd ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/generic-apiserver-start-informers ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/max-in-flight-filter ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 24 08:51:48 crc kubenswrapper[4886]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/project.openshift.io-projectcache ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/openshift.io-startinformers ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 24 08:51:48 crc kubenswrapper[4886]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 24 08:51:48 crc kubenswrapper[4886]: livez check failed Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.163302 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" podUID="eb6743a4-a25e-4b1b-ae3b-3d2199cfbff2" containerName="openshift-apiserver" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.242064 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e1a52d4-abae-4519-8288-c1c56ea36e76-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.247580 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e1a52d4-abae-4519-8288-c1c56ea36e76-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.474296 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-fkfxv"] Nov 24 08:51:48 crc kubenswrapper[4886]: W1124 08:51:48.484324 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7844f778_bcd5_40fe_ad92_0cc0fcd6c5d7.slice/crio-fe5b50bed31bd1a9e0940aa8069089f0115326dbb68a5e5336e95c3890330aac WatchSource:0}: Error finding container fe5b50bed31bd1a9e0940aa8069089f0115326dbb68a5e5336e95c3890330aac: Status 404 returned error can't find the container with id fe5b50bed31bd1a9e0940aa8069089f0115326dbb68a5e5336e95c3890330aac Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.530874 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:48 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:48 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:48 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.530949 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" 
podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.706298 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" event={"ID":"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7","Type":"ContainerStarted","Data":"fe5b50bed31bd1a9e0940aa8069089f0115326dbb68a5e5336e95c3890330aac"} Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.712556 4886 generic.go:334] "Generic (PLEG): container finished" podID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerID="cd19e2b1c8d0e2350ac66224460f947f74b6b9c3305b95f37c450a6dc5e00738" exitCode=0 Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.712619 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxrnh" event={"ID":"b87e82fa-37f9-46dc-8170-c77373da3ff8","Type":"ContainerDied","Data":"cd19e2b1c8d0e2350ac66224460f947f74b6b9c3305b95f37c450a6dc5e00738"} Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.715823 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6kp4" event={"ID":"98e3e498-9c73-407d-91f1-1032f1d0a4b2","Type":"ContainerStarted","Data":"5afb3d7f6b398600d05b416765a77d40574ae78047fbc8a60b6f08109c19eb44"} Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.749122 4886 generic.go:334] "Generic (PLEG): container finished" podID="1e4ca688-82ea-4513-8a11-1adb96141627" containerID="65c5348e1bad3fd4d51643662cdbea2d9f22ec0f1ede69dacbb76c050b3d74b8" exitCode=0 Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.749228 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1e4ca688-82ea-4513-8a11-1adb96141627","Type":"ContainerDied","Data":"65c5348e1bad3fd4d51643662cdbea2d9f22ec0f1ede69dacbb76c050b3d74b8"} Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.756908 
4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.756917 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8e1a52d4-abae-4519-8288-c1c56ea36e76","Type":"ContainerDied","Data":"f74734f3398d573af9e02b10fa78b20a00f7ebae33b83af38b1ba7576dbd35ed"} Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.756984 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74734f3398d573af9e02b10fa78b20a00f7ebae33b83af38b1ba7576dbd35ed" Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.761959 4886 generic.go:334] "Generic (PLEG): container finished" podID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerID="53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee" exitCode=0 Nov 24 08:51:48 crc kubenswrapper[4886]: I1124 08:51:48.763433 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg5ls" event={"ID":"5cfd24b4-c215-49b2-af8e-a3875c05c738","Type":"ContainerDied","Data":"53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee"} Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.124576 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.172161 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1afd949e-d0f2-41b8-9632-917df3468232-config-volume\") pod \"1afd949e-d0f2-41b8-9632-917df3468232\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.172245 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgz6j\" (UniqueName: \"kubernetes.io/projected/1afd949e-d0f2-41b8-9632-917df3468232-kube-api-access-kgz6j\") pod \"1afd949e-d0f2-41b8-9632-917df3468232\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.172339 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1afd949e-d0f2-41b8-9632-917df3468232-secret-volume\") pod \"1afd949e-d0f2-41b8-9632-917df3468232\" (UID: \"1afd949e-d0f2-41b8-9632-917df3468232\") " Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.173370 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afd949e-d0f2-41b8-9632-917df3468232-config-volume" (OuterVolumeSpecName: "config-volume") pod "1afd949e-d0f2-41b8-9632-917df3468232" (UID: "1afd949e-d0f2-41b8-9632-917df3468232"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.174032 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1afd949e-d0f2-41b8-9632-917df3468232-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.184367 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1afd949e-d0f2-41b8-9632-917df3468232-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1afd949e-d0f2-41b8-9632-917df3468232" (UID: "1afd949e-d0f2-41b8-9632-917df3468232"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.190659 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afd949e-d0f2-41b8-9632-917df3468232-kube-api-access-kgz6j" (OuterVolumeSpecName: "kube-api-access-kgz6j") pod "1afd949e-d0f2-41b8-9632-917df3468232" (UID: "1afd949e-d0f2-41b8-9632-917df3468232"). InnerVolumeSpecName "kube-api-access-kgz6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.275317 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgz6j\" (UniqueName: \"kubernetes.io/projected/1afd949e-d0f2-41b8-9632-917df3468232-kube-api-access-kgz6j\") on node \"crc\" DevicePath \"\"" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.275361 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1afd949e-d0f2-41b8-9632-917df3468232-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.529392 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:49 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:49 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:49 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.529479 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.774913 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" event={"ID":"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7","Type":"ContainerStarted","Data":"ad9ddabeed1835f4563bb2efcb4eaabdfe870255c7b6bc1990b5185c1c7ea192"} Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.779113 4886 generic.go:334] "Generic (PLEG): container finished" podID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerID="ede38c567a3731a76d2e177db0ce1daceb461c8ab4441ee11c58e43c57ce7239" 
exitCode=0 Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.779197 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6kp4" event={"ID":"98e3e498-9c73-407d-91f1-1032f1d0a4b2","Type":"ContainerDied","Data":"ede38c567a3731a76d2e177db0ce1daceb461c8ab4441ee11c58e43c57ce7239"} Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.799553 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.800201 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r" event={"ID":"1afd949e-d0f2-41b8-9632-917df3468232","Type":"ContainerDied","Data":"aa55b97334366ac064ff40f5cb14bfd699228dd32621f679bd52fd7658ae8b90"} Nov 24 08:51:49 crc kubenswrapper[4886]: I1124 08:51:49.800249 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa55b97334366ac064ff40f5cb14bfd699228dd32621f679bd52fd7658ae8b90" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.116159 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.191658 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e4ca688-82ea-4513-8a11-1adb96141627-kubelet-dir\") pod \"1e4ca688-82ea-4513-8a11-1adb96141627\" (UID: \"1e4ca688-82ea-4513-8a11-1adb96141627\") " Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.191781 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e4ca688-82ea-4513-8a11-1adb96141627-kube-api-access\") pod \"1e4ca688-82ea-4513-8a11-1adb96141627\" (UID: \"1e4ca688-82ea-4513-8a11-1adb96141627\") " Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.191833 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4ca688-82ea-4513-8a11-1adb96141627-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1e4ca688-82ea-4513-8a11-1adb96141627" (UID: "1e4ca688-82ea-4513-8a11-1adb96141627"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.192135 4886 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e4ca688-82ea-4513-8a11-1adb96141627-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.207445 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4ca688-82ea-4513-8a11-1adb96141627-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1e4ca688-82ea-4513-8a11-1adb96141627" (UID: "1e4ca688-82ea-4513-8a11-1adb96141627"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.293977 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e4ca688-82ea-4513-8a11-1adb96141627-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.540796 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:50 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:50 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:50 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.540859 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.821728 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.821870 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1e4ca688-82ea-4513-8a11-1adb96141627","Type":"ContainerDied","Data":"f8cd166d1c6dcf1f43f25d436eafa7c2db93afaf6221a4db6d7fbd4393cf60a1"} Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.822302 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8cd166d1c6dcf1f43f25d436eafa7c2db93afaf6221a4db6d7fbd4393cf60a1" Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.843282 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-fkfxv" event={"ID":"7844f778-bcd5-40fe-ad92-0cc0fcd6c5d7","Type":"ContainerStarted","Data":"2a112e353239a2c657c73768fc8089c5612c7333c82d62ba9ea0ba795394ac15"} Nov 24 08:51:50 crc kubenswrapper[4886]: I1124 08:51:50.867743 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-fkfxv" podStartSLOduration=145.867683716 podStartE2EDuration="2m25.867683716s" podCreationTimestamp="2025-11-24 08:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:51:50.864098068 +0000 UTC m=+166.750836203" watchObservedRunningTime="2025-11-24 08:51:50.867683716 +0000 UTC m=+166.754421851" Nov 24 08:51:51 crc kubenswrapper[4886]: I1124 08:51:51.529363 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:51 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:51 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:51 crc 
kubenswrapper[4886]: healthz check failed Nov 24 08:51:51 crc kubenswrapper[4886]: I1124 08:51:51.529588 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:52 crc kubenswrapper[4886]: I1124 08:51:52.449853 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x8hmr" Nov 24 08:51:52 crc kubenswrapper[4886]: I1124 08:51:52.528591 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:52 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:52 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:52 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:52 crc kubenswrapper[4886]: I1124 08:51:52.528679 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:53 crc kubenswrapper[4886]: I1124 08:51:53.162174 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:53 crc kubenswrapper[4886]: I1124 08:51:53.169877 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lz4ml" Nov 24 08:51:53 crc kubenswrapper[4886]: I1124 08:51:53.529296 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:53 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:53 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:53 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:53 crc kubenswrapper[4886]: I1124 08:51:53.529406 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:54 crc kubenswrapper[4886]: I1124 08:51:54.529727 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:54 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:54 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:54 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:54 crc kubenswrapper[4886]: I1124 08:51:54.530143 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:55 crc kubenswrapper[4886]: I1124 08:51:55.533876 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:55 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:55 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:55 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:55 crc kubenswrapper[4886]: I1124 08:51:55.533946 4886 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:56 crc kubenswrapper[4886]: I1124 08:51:56.248694 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6cvz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 08:51:56 crc kubenswrapper[4886]: I1124 08:51:56.248767 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-w6cvz" podUID="580c4fdc-bdb3-4099-b715-ac4c63acecb2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 24 08:51:56 crc kubenswrapper[4886]: I1124 08:51:56.248784 4886 patch_prober.go:28] interesting pod/downloads-7954f5f757-w6cvz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 08:51:56 crc kubenswrapper[4886]: I1124 08:51:56.248850 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-w6cvz" podUID="580c4fdc-bdb3-4099-b715-ac4c63acecb2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 24 08:51:56 crc kubenswrapper[4886]: I1124 08:51:56.529448 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:56 crc kubenswrapper[4886]: 
[-]has-synced failed: reason withheld Nov 24 08:51:56 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:56 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:56 crc kubenswrapper[4886]: I1124 08:51:56.529527 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:56 crc kubenswrapper[4886]: I1124 08:51:56.864370 4886 patch_prober.go:28] interesting pod/console-f9d7485db-tbxjm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 24 08:51:56 crc kubenswrapper[4886]: I1124 08:51:56.864493 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tbxjm" podUID="87f902e1-073b-4ccd-8b3a-717f802e9671" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 24 08:51:57 crc kubenswrapper[4886]: I1124 08:51:57.534760 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:57 crc kubenswrapper[4886]: [-]has-synced failed: reason withheld Nov 24 08:51:57 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:57 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:57 crc kubenswrapper[4886]: I1124 08:51:57.535089 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 
08:51:58 crc kubenswrapper[4886]: I1124 08:51:58.529537 4886 patch_prober.go:28] interesting pod/router-default-5444994796-xbr84 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:51:58 crc kubenswrapper[4886]: [+]has-synced ok Nov 24 08:51:58 crc kubenswrapper[4886]: [+]process-running ok Nov 24 08:51:58 crc kubenswrapper[4886]: healthz check failed Nov 24 08:51:58 crc kubenswrapper[4886]: I1124 08:51:58.529660 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xbr84" podUID="a54a2524-099e-4a0f-9762-eafbc576dc56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:51:59 crc kubenswrapper[4886]: I1124 08:51:59.532031 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:51:59 crc kubenswrapper[4886]: I1124 08:51:59.535595 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xbr84" Nov 24 08:52:01 crc kubenswrapper[4886]: I1124 08:52:01.784992 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:52:01 crc kubenswrapper[4886]: I1124 08:52:01.785408 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:52:03 crc kubenswrapper[4886]: I1124 08:52:03.117306 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:52:06 crc kubenswrapper[4886]: I1124 08:52:06.209105 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:52:06 crc kubenswrapper[4886]: I1124 08:52:06.268346 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-w6cvz" Nov 24 08:52:06 crc kubenswrapper[4886]: I1124 08:52:06.861769 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:52:06 crc kubenswrapper[4886]: I1124 08:52:06.865822 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 08:52:14 crc kubenswrapper[4886]: E1124 08:52:14.695885 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 08:52:14 crc kubenswrapper[4886]: E1124 08:52:14.696466 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74hf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l8vn8_openshift-marketplace(d89bb378-d235-4377-9908-0008691b9174): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:52:14 crc kubenswrapper[4886]: E1124 08:52:14.697721 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l8vn8" podUID="d89bb378-d235-4377-9908-0008691b9174" Nov 24 08:52:17 crc 
kubenswrapper[4886]: E1124 08:52:17.096185 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l8vn8" podUID="d89bb378-d235-4377-9908-0008691b9174" Nov 24 08:52:17 crc kubenswrapper[4886]: I1124 08:52:17.350807 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pxjkj" Nov 24 08:52:17 crc kubenswrapper[4886]: E1124 08:52:17.518583 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 08:52:17 crc kubenswrapper[4886]: E1124 08:52:17.519341 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84x56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qjxlb_openshift-marketplace(aa41e796-f145-4455-b8fa-d751c98f7b5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:52:17 crc kubenswrapper[4886]: E1124 08:52:17.520812 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qjxlb" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" Nov 24 08:52:17 crc 
kubenswrapper[4886]: E1124 08:52:17.615501 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 08:52:17 crc kubenswrapper[4886]: E1124 08:52:17.615715 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkrdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-2s97s_openshift-marketplace(44504d41-1a7d-4a15-a270-24325b0954a9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:52:17 crc kubenswrapper[4886]: E1124 08:52:17.616937 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2s97s" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" Nov 24 08:52:18 crc kubenswrapper[4886]: E1124 08:52:18.609653 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2s97s" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" Nov 24 08:52:18 crc kubenswrapper[4886]: E1124 08:52:18.609668 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qjxlb" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" Nov 24 08:52:18 crc kubenswrapper[4886]: E1124 08:52:18.674482 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 08:52:18 crc kubenswrapper[4886]: E1124 08:52:18.674695 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhmnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jxrnh_openshift-marketplace(b87e82fa-37f9-46dc-8170-c77373da3ff8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:52:18 crc kubenswrapper[4886]: E1124 08:52:18.676030 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jxrnh" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" Nov 24 08:52:18 crc kubenswrapper[4886]: E1124 08:52:18.699989 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 08:52:18 crc kubenswrapper[4886]: E1124 08:52:18.700224 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9z5tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tbg5c_openshift-marketplace(a5f30dce-707e-45e7-a928-4602478ac07d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:52:18 crc kubenswrapper[4886]: E1124 08:52:18.701360 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tbg5c" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" Nov 24 08:52:21 crc kubenswrapper[4886]: E1124 08:52:21.777671 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jxrnh" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" Nov 24 08:52:21 crc kubenswrapper[4886]: E1124 08:52:21.777797 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tbg5c" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" Nov 24 08:52:21 crc kubenswrapper[4886]: E1124 08:52:21.855055 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 08:52:21 crc kubenswrapper[4886]: E1124 08:52:21.855839 4886 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4wmcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j6kp4_openshift-marketplace(98e3e498-9c73-407d-91f1-1032f1d0a4b2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:52:21 crc kubenswrapper[4886]: E1124 08:52:21.856988 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j6kp4" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" Nov 24 08:52:21 crc kubenswrapper[4886]: E1124 08:52:21.865703 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 08:52:21 crc kubenswrapper[4886]: E1124 08:52:21.865913 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8fwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pg5ls_openshift-marketplace(5cfd24b4-c215-49b2-af8e-a3875c05c738): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:52:21 crc kubenswrapper[4886]: E1124 08:52:21.867270 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pg5ls" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" Nov 24 08:52:22 crc kubenswrapper[4886]: I1124 08:52:22.151196 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztc6l" event={"ID":"925c272d-d181-4754-a9cb-9b9b11e18f6c","Type":"ContainerStarted","Data":"b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87"} Nov 24 08:52:22 crc kubenswrapper[4886]: E1124 08:52:22.154616 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j6kp4" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" Nov 24 08:52:22 crc kubenswrapper[4886]: E1124 08:52:22.154808 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pg5ls" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" Nov 24 08:52:23 crc kubenswrapper[4886]: I1124 
08:52:23.162127 4886 generic.go:334] "Generic (PLEG): container finished" podID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerID="b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87" exitCode=0 Nov 24 08:52:23 crc kubenswrapper[4886]: I1124 08:52:23.162530 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztc6l" event={"ID":"925c272d-d181-4754-a9cb-9b9b11e18f6c","Type":"ContainerDied","Data":"b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87"} Nov 24 08:52:24 crc kubenswrapper[4886]: I1124 08:52:24.173815 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztc6l" event={"ID":"925c272d-d181-4754-a9cb-9b9b11e18f6c","Type":"ContainerStarted","Data":"4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463"} Nov 24 08:52:27 crc kubenswrapper[4886]: I1124 08:52:27.873330 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ztc6l" podStartSLOduration=7.660697009 podStartE2EDuration="44.873292341s" podCreationTimestamp="2025-11-24 08:51:43 +0000 UTC" firstStartedPulling="2025-11-24 08:51:46.512409078 +0000 UTC m=+162.399147213" lastFinishedPulling="2025-11-24 08:52:23.72500441 +0000 UTC m=+199.611742545" observedRunningTime="2025-11-24 08:52:24.214740592 +0000 UTC m=+200.101478737" watchObservedRunningTime="2025-11-24 08:52:27.873292341 +0000 UTC m=+203.760030476" Nov 24 08:52:30 crc kubenswrapper[4886]: I1124 08:52:30.210236 4886 generic.go:334] "Generic (PLEG): container finished" podID="d89bb378-d235-4377-9908-0008691b9174" containerID="e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1" exitCode=0 Nov 24 08:52:30 crc kubenswrapper[4886]: I1124 08:52:30.210318 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8vn8" 
event={"ID":"d89bb378-d235-4377-9908-0008691b9174","Type":"ContainerDied","Data":"e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1"} Nov 24 08:52:31 crc kubenswrapper[4886]: I1124 08:52:31.219549 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8vn8" event={"ID":"d89bb378-d235-4377-9908-0008691b9174","Type":"ContainerStarted","Data":"af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f"} Nov 24 08:52:31 crc kubenswrapper[4886]: I1124 08:52:31.253190 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l8vn8" podStartSLOduration=2.991328594 podStartE2EDuration="48.25316761s" podCreationTimestamp="2025-11-24 08:51:43 +0000 UTC" firstStartedPulling="2025-11-24 08:51:45.357924484 +0000 UTC m=+161.244662619" lastFinishedPulling="2025-11-24 08:52:30.61976351 +0000 UTC m=+206.506501635" observedRunningTime="2025-11-24 08:52:31.251680956 +0000 UTC m=+207.138419091" watchObservedRunningTime="2025-11-24 08:52:31.25316761 +0000 UTC m=+207.139905745" Nov 24 08:52:31 crc kubenswrapper[4886]: I1124 08:52:31.784645 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:52:31 crc kubenswrapper[4886]: I1124 08:52:31.785308 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:52:31 crc kubenswrapper[4886]: I1124 08:52:31.785497 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:52:31 crc kubenswrapper[4886]: I1124 08:52:31.786506 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 08:52:31 crc kubenswrapper[4886]: I1124 08:52:31.786936 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34" gracePeriod=600 Nov 24 08:52:32 crc kubenswrapper[4886]: I1124 08:52:32.230105 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34" exitCode=0 Nov 24 08:52:32 crc kubenswrapper[4886]: I1124 08:52:32.230221 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34"} Nov 24 08:52:32 crc kubenswrapper[4886]: I1124 08:52:32.233796 4886 generic.go:334] "Generic (PLEG): container finished" podID="44504d41-1a7d-4a15-a270-24325b0954a9" containerID="eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b" exitCode=0 Nov 24 08:52:32 crc kubenswrapper[4886]: I1124 08:52:32.233843 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2s97s" 
event={"ID":"44504d41-1a7d-4a15-a270-24325b0954a9","Type":"ContainerDied","Data":"eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b"} Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.242937 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"71eb5673abcc11e0163c9266fe128b74e3ad31a62badd22878a0c5c714b5f6d8"} Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.248722 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2s97s" event={"ID":"44504d41-1a7d-4a15-a270-24325b0954a9","Type":"ContainerStarted","Data":"2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d"} Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.290558 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2s97s" podStartSLOduration=3.0180300349999998 podStartE2EDuration="50.290527183s" podCreationTimestamp="2025-11-24 08:51:43 +0000 UTC" firstStartedPulling="2025-11-24 08:51:45.38989138 +0000 UTC m=+161.276629515" lastFinishedPulling="2025-11-24 08:52:32.662388528 +0000 UTC m=+208.549126663" observedRunningTime="2025-11-24 08:52:33.288226916 +0000 UTC m=+209.174965071" watchObservedRunningTime="2025-11-24 08:52:33.290527183 +0000 UTC m=+209.177265318" Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.513823 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.514448 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.749891 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.749981 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.920574 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.957936 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:52:33 crc kubenswrapper[4886]: I1124 08:52:33.958261 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:52:34 crc kubenswrapper[4886]: I1124 08:52:34.063702 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:52:34 crc kubenswrapper[4886]: I1124 08:52:34.334025 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:52:34 crc kubenswrapper[4886]: I1124 08:52:34.897346 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2s97s" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="registry-server" probeResult="failure" output=< Nov 24 08:52:34 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 08:52:34 crc kubenswrapper[4886]: > Nov 24 08:52:35 crc kubenswrapper[4886]: I1124 08:52:35.273265 4886 generic.go:334] "Generic (PLEG): container finished" podID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerID="56f0d139681d9520145179f9e2a3644963add65d174a849256275e4515a8ae09" exitCode=0 Nov 24 08:52:35 crc kubenswrapper[4886]: I1124 08:52:35.273379 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-qjxlb" event={"ID":"aa41e796-f145-4455-b8fa-d751c98f7b5f","Type":"ContainerDied","Data":"56f0d139681d9520145179f9e2a3644963add65d174a849256275e4515a8ae09"} Nov 24 08:52:35 crc kubenswrapper[4886]: I1124 08:52:35.279250 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg5ls" event={"ID":"5cfd24b4-c215-49b2-af8e-a3875c05c738","Type":"ContainerStarted","Data":"fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3"} Nov 24 08:52:35 crc kubenswrapper[4886]: I1124 08:52:35.289790 4886 generic.go:334] "Generic (PLEG): container finished" podID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerID="a918d64013747cb85ce2389ff2c5f11b74e1d4f6ff1cd2b46680bfe31bf86e42" exitCode=0 Nov 24 08:52:35 crc kubenswrapper[4886]: I1124 08:52:35.289883 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxrnh" event={"ID":"b87e82fa-37f9-46dc-8170-c77373da3ff8","Type":"ContainerDied","Data":"a918d64013747cb85ce2389ff2c5f11b74e1d4f6ff1cd2b46680bfe31bf86e42"} Nov 24 08:52:35 crc kubenswrapper[4886]: I1124 08:52:35.298830 4886 generic.go:334] "Generic (PLEG): container finished" podID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerID="a4182f1d2d06eb78cb736fb14193466842c8b11fff92393ae739f35e58d13604" exitCode=0 Nov 24 08:52:35 crc kubenswrapper[4886]: I1124 08:52:35.298926 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6kp4" event={"ID":"98e3e498-9c73-407d-91f1-1032f1d0a4b2","Type":"ContainerDied","Data":"a4182f1d2d06eb78cb736fb14193466842c8b11fff92393ae739f35e58d13604"} Nov 24 08:52:36 crc kubenswrapper[4886]: I1124 08:52:36.308054 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxrnh" event={"ID":"b87e82fa-37f9-46dc-8170-c77373da3ff8","Type":"ContainerStarted","Data":"22a6783a3b5463fe689498ab356d871f8a40cd85ed08a30882b603ae0b25dbb3"} 
Nov 24 08:52:36 crc kubenswrapper[4886]: I1124 08:52:36.310728 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjxlb" event={"ID":"aa41e796-f145-4455-b8fa-d751c98f7b5f","Type":"ContainerStarted","Data":"f3630721528ba5c1692d53305eed9f6f2b54706bca1ec91c9cdd444af71f51f3"} Nov 24 08:52:36 crc kubenswrapper[4886]: I1124 08:52:36.313575 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbg5c" event={"ID":"a5f30dce-707e-45e7-a928-4602478ac07d","Type":"ContainerStarted","Data":"3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921"} Nov 24 08:52:36 crc kubenswrapper[4886]: I1124 08:52:36.315262 4886 generic.go:334] "Generic (PLEG): container finished" podID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerID="fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3" exitCode=0 Nov 24 08:52:36 crc kubenswrapper[4886]: I1124 08:52:36.315311 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg5ls" event={"ID":"5cfd24b4-c215-49b2-af8e-a3875c05c738","Type":"ContainerDied","Data":"fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3"} Nov 24 08:52:36 crc kubenswrapper[4886]: I1124 08:52:36.330021 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jxrnh" podStartSLOduration=3.194131301 podStartE2EDuration="51.329997131s" podCreationTimestamp="2025-11-24 08:51:45 +0000 UTC" firstStartedPulling="2025-11-24 08:51:47.664305981 +0000 UTC m=+163.551044116" lastFinishedPulling="2025-11-24 08:52:35.800171811 +0000 UTC m=+211.686909946" observedRunningTime="2025-11-24 08:52:36.328729204 +0000 UTC m=+212.215467339" watchObservedRunningTime="2025-11-24 08:52:36.329997131 +0000 UTC m=+212.216735276" Nov 24 08:52:36 crc kubenswrapper[4886]: I1124 08:52:36.401284 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-qjxlb" podStartSLOduration=4.121886325 podStartE2EDuration="53.401260948s" podCreationTimestamp="2025-11-24 08:51:43 +0000 UTC" firstStartedPulling="2025-11-24 08:51:46.497642853 +0000 UTC m=+162.384380988" lastFinishedPulling="2025-11-24 08:52:35.777017476 +0000 UTC m=+211.663755611" observedRunningTime="2025-11-24 08:52:36.397381785 +0000 UTC m=+212.284119930" watchObservedRunningTime="2025-11-24 08:52:36.401260948 +0000 UTC m=+212.287999083" Nov 24 08:52:36 crc kubenswrapper[4886]: E1124 08:52:36.678266 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5f30dce_707e_45e7_a928_4602478ac07d.slice/crio-3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921.scope\": RecentStats: unable to find data in memory cache]" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.244511 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ztc6l"] Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.325242 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg5ls" event={"ID":"5cfd24b4-c215-49b2-af8e-a3875c05c738","Type":"ContainerStarted","Data":"aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca"} Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.327954 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6kp4" event={"ID":"98e3e498-9c73-407d-91f1-1032f1d0a4b2","Type":"ContainerStarted","Data":"2638ce912b5af715bdd8224d50528691ef96afa3b7a6c4270cb6111eed84c266"} Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.330901 4886 generic.go:334] "Generic (PLEG): container finished" podID="a5f30dce-707e-45e7-a928-4602478ac07d" containerID="3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921" exitCode=0 Nov 24 08:52:37 crc 
kubenswrapper[4886]: I1124 08:52:37.331005 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbg5c" event={"ID":"a5f30dce-707e-45e7-a928-4602478ac07d","Type":"ContainerDied","Data":"3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921"} Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.331219 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ztc6l" podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerName="registry-server" containerID="cri-o://4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463" gracePeriod=2 Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.358987 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pg5ls" podStartSLOduration=3.209858765 podStartE2EDuration="51.358962427s" podCreationTimestamp="2025-11-24 08:51:46 +0000 UTC" firstStartedPulling="2025-11-24 08:51:48.782644635 +0000 UTC m=+164.669382770" lastFinishedPulling="2025-11-24 08:52:36.931748297 +0000 UTC m=+212.818486432" observedRunningTime="2025-11-24 08:52:37.354719813 +0000 UTC m=+213.241457958" watchObservedRunningTime="2025-11-24 08:52:37.358962427 +0000 UTC m=+213.245700562" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.403819 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j6kp4" podStartSLOduration=4.223706365 podStartE2EDuration="51.403791763s" podCreationTimestamp="2025-11-24 08:51:46 +0000 UTC" firstStartedPulling="2025-11-24 08:51:49.784770554 +0000 UTC m=+165.671508689" lastFinishedPulling="2025-11-24 08:52:36.964855952 +0000 UTC m=+212.851594087" observedRunningTime="2025-11-24 08:52:37.403281519 +0000 UTC m=+213.290019674" watchObservedRunningTime="2025-11-24 08:52:37.403791763 +0000 UTC m=+213.290529898" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.820588 4886 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.866655 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-catalog-content\") pod \"925c272d-d181-4754-a9cb-9b9b11e18f6c\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.866782 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xsnr\" (UniqueName: \"kubernetes.io/projected/925c272d-d181-4754-a9cb-9b9b11e18f6c-kube-api-access-6xsnr\") pod \"925c272d-d181-4754-a9cb-9b9b11e18f6c\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.866831 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-utilities\") pod \"925c272d-d181-4754-a9cb-9b9b11e18f6c\" (UID: \"925c272d-d181-4754-a9cb-9b9b11e18f6c\") " Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.868191 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-utilities" (OuterVolumeSpecName: "utilities") pod "925c272d-d181-4754-a9cb-9b9b11e18f6c" (UID: "925c272d-d181-4754-a9cb-9b9b11e18f6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.875280 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925c272d-d181-4754-a9cb-9b9b11e18f6c-kube-api-access-6xsnr" (OuterVolumeSpecName: "kube-api-access-6xsnr") pod "925c272d-d181-4754-a9cb-9b9b11e18f6c" (UID: "925c272d-d181-4754-a9cb-9b9b11e18f6c"). 
InnerVolumeSpecName "kube-api-access-6xsnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.931333 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "925c272d-d181-4754-a9cb-9b9b11e18f6c" (UID: "925c272d-d181-4754-a9cb-9b9b11e18f6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.968375 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.968418 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xsnr\" (UniqueName: \"kubernetes.io/projected/925c272d-d181-4754-a9cb-9b9b11e18f6c-kube-api-access-6xsnr\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:37 crc kubenswrapper[4886]: I1124 08:52:37.968433 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c272d-d181-4754-a9cb-9b9b11e18f6c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.343352 4886 generic.go:334] "Generic (PLEG): container finished" podID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerID="4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463" exitCode=0 Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.343464 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ztc6l" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.343461 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztc6l" event={"ID":"925c272d-d181-4754-a9cb-9b9b11e18f6c","Type":"ContainerDied","Data":"4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463"} Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.343600 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztc6l" event={"ID":"925c272d-d181-4754-a9cb-9b9b11e18f6c","Type":"ContainerDied","Data":"3d5c83feceeea5a5abbefb15397cbbc2098a7b23d5c642760be2ca5f40d8ee49"} Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.343625 4886 scope.go:117] "RemoveContainer" containerID="4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.347766 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbg5c" event={"ID":"a5f30dce-707e-45e7-a928-4602478ac07d","Type":"ContainerStarted","Data":"d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2"} Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.364028 4886 scope.go:117] "RemoveContainer" containerID="b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.382211 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tbg5c" podStartSLOduration=2.915957476 podStartE2EDuration="53.382190447s" podCreationTimestamp="2025-11-24 08:51:45 +0000 UTC" firstStartedPulling="2025-11-24 08:51:47.611686649 +0000 UTC m=+163.498424784" lastFinishedPulling="2025-11-24 08:52:38.07791963 +0000 UTC m=+213.964657755" observedRunningTime="2025-11-24 08:52:38.376539692 +0000 UTC m=+214.263277827" watchObservedRunningTime="2025-11-24 
08:52:38.382190447 +0000 UTC m=+214.268928582" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.399526 4886 scope.go:117] "RemoveContainer" containerID="8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.402585 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ztc6l"] Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.414894 4886 scope.go:117] "RemoveContainer" containerID="4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463" Nov 24 08:52:38 crc kubenswrapper[4886]: E1124 08:52:38.415387 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463\": container with ID starting with 4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463 not found: ID does not exist" containerID="4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.415436 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463"} err="failed to get container status \"4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463\": rpc error: code = NotFound desc = could not find container \"4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463\": container with ID starting with 4c45663d6d2f0d8fbdcbf1d52e15791fb1ce41364177bda2f10321c0d237a463 not found: ID does not exist" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.415468 4886 scope.go:117] "RemoveContainer" containerID="b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87" Nov 24 08:52:38 crc kubenswrapper[4886]: E1124 08:52:38.415882 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87\": container with ID starting with b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87 not found: ID does not exist" containerID="b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.415940 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87"} err="failed to get container status \"b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87\": rpc error: code = NotFound desc = could not find container \"b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87\": container with ID starting with b3a585b2d62a7a94e3912327a61adfc267b7d3d9055480c3c4be1ad0adeeab87 not found: ID does not exist" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.415981 4886 scope.go:117] "RemoveContainer" containerID="8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d" Nov 24 08:52:38 crc kubenswrapper[4886]: E1124 08:52:38.416393 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d\": container with ID starting with 8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d not found: ID does not exist" containerID="8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.416424 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d"} err="failed to get container status \"8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d\": rpc error: code = NotFound desc = could not find container \"8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d\": container 
with ID starting with 8deacea395dbca010082894617f5cfccc15be962f58566b91ab42ef6e915653d not found: ID does not exist" Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.416593 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ztc6l"] Nov 24 08:52:38 crc kubenswrapper[4886]: I1124 08:52:38.856813 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" path="/var/lib/kubelet/pods/925c272d-d181-4754-a9cb-9b9b11e18f6c/volumes" Nov 24 08:52:42 crc kubenswrapper[4886]: I1124 08:52:42.128855 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9sdq"] Nov 24 08:52:42 crc kubenswrapper[4886]: I1124 08:52:42.129979 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" podUID="32727d31-2207-4688-b70c-6045b674538b" containerName="controller-manager" containerID="cri-o://cb4617b1031f737a531df633c01416bcf80b241c706e8e32b34439c75f45c002" gracePeriod=30 Nov 24 08:52:42 crc kubenswrapper[4886]: I1124 08:52:42.212740 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh"] Nov 24 08:52:42 crc kubenswrapper[4886]: I1124 08:52:42.213023 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" podUID="b65ba9fc-0a0d-49f2-9991-319b054df0b0" containerName="route-controller-manager" containerID="cri-o://98637391d646e5def58fbedaa6d121a822611aa1c862a56bc29b7aaefd3ff356" gracePeriod=30 Nov 24 08:52:42 crc kubenswrapper[4886]: I1124 08:52:42.381130 4886 generic.go:334] "Generic (PLEG): container finished" podID="b65ba9fc-0a0d-49f2-9991-319b054df0b0" containerID="98637391d646e5def58fbedaa6d121a822611aa1c862a56bc29b7aaefd3ff356" exitCode=0 Nov 24 
08:52:42 crc kubenswrapper[4886]: I1124 08:52:42.381219 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" event={"ID":"b65ba9fc-0a0d-49f2-9991-319b054df0b0","Type":"ContainerDied","Data":"98637391d646e5def58fbedaa6d121a822611aa1c862a56bc29b7aaefd3ff356"} Nov 24 08:52:42 crc kubenswrapper[4886]: I1124 08:52:42.383695 4886 generic.go:334] "Generic (PLEG): container finished" podID="32727d31-2207-4688-b70c-6045b674538b" containerID="cb4617b1031f737a531df633c01416bcf80b241c706e8e32b34439c75f45c002" exitCode=0 Nov 24 08:52:42 crc kubenswrapper[4886]: I1124 08:52:42.383749 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" event={"ID":"32727d31-2207-4688-b70c-6045b674538b","Type":"ContainerDied","Data":"cb4617b1031f737a531df633c01416bcf80b241c706e8e32b34439c75f45c002"} Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.135009 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.142761 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244069 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config\") pod \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244129 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-config\") pod \"32727d31-2207-4688-b70c-6045b674538b\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244194 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-proxy-ca-bundles\") pod \"32727d31-2207-4688-b70c-6045b674538b\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244240 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert\") pod \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244266 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32727d31-2207-4688-b70c-6045b674538b-serving-cert\") pod \"32727d31-2207-4688-b70c-6045b674538b\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244302 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-client-ca\") pod \"32727d31-2207-4688-b70c-6045b674538b\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244336 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca\") pod \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244405 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74mns\" (UniqueName: \"kubernetes.io/projected/32727d31-2207-4688-b70c-6045b674538b-kube-api-access-74mns\") pod \"32727d31-2207-4688-b70c-6045b674538b\" (UID: \"32727d31-2207-4688-b70c-6045b674538b\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.244434 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v68xl\" (UniqueName: \"kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl\") pod \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\" (UID: \"b65ba9fc-0a0d-49f2-9991-319b054df0b0\") " Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.245699 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-client-ca" (OuterVolumeSpecName: "client-ca") pod "32727d31-2207-4688-b70c-6045b674538b" (UID: "32727d31-2207-4688-b70c-6045b674538b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.245719 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "b65ba9fc-0a0d-49f2-9991-319b054df0b0" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.245819 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config" (OuterVolumeSpecName: "config") pod "b65ba9fc-0a0d-49f2-9991-319b054df0b0" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.246117 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-config" (OuterVolumeSpecName: "config") pod "32727d31-2207-4688-b70c-6045b674538b" (UID: "32727d31-2207-4688-b70c-6045b674538b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.246178 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "32727d31-2207-4688-b70c-6045b674538b" (UID: "32727d31-2207-4688-b70c-6045b674538b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.251291 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b65ba9fc-0a0d-49f2-9991-319b054df0b0" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.252507 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl" (OuterVolumeSpecName: "kube-api-access-v68xl") pod "b65ba9fc-0a0d-49f2-9991-319b054df0b0" (UID: "b65ba9fc-0a0d-49f2-9991-319b054df0b0"). InnerVolumeSpecName "kube-api-access-v68xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.257333 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32727d31-2207-4688-b70c-6045b674538b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "32727d31-2207-4688-b70c-6045b674538b" (UID: "32727d31-2207-4688-b70c-6045b674538b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.263378 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32727d31-2207-4688-b70c-6045b674538b-kube-api-access-74mns" (OuterVolumeSpecName: "kube-api-access-74mns") pod "32727d31-2207-4688-b70c-6045b674538b" (UID: "32727d31-2207-4688-b70c-6045b674538b"). InnerVolumeSpecName "kube-api-access-74mns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345544 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v68xl\" (UniqueName: \"kubernetes.io/projected/b65ba9fc-0a0d-49f2-9991-319b054df0b0-kube-api-access-v68xl\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345588 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345598 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345609 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345619 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b65ba9fc-0a0d-49f2-9991-319b054df0b0-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345629 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32727d31-2207-4688-b70c-6045b674538b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345639 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/32727d31-2207-4688-b70c-6045b674538b-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345648 4886 reconciler_common.go:293] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b65ba9fc-0a0d-49f2-9991-319b054df0b0-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.345658 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74mns\" (UniqueName: \"kubernetes.io/projected/32727d31-2207-4688-b70c-6045b674538b-kube-api-access-74mns\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.397787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" event={"ID":"32727d31-2207-4688-b70c-6045b674538b","Type":"ContainerDied","Data":"c5ec4e295cfb04d8f64542d7b1c5b0f0e56c6f6289195e25940474276d4b8e16"} Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.397871 4886 scope.go:117] "RemoveContainer" containerID="cb4617b1031f737a531df633c01416bcf80b241c706e8e32b34439c75f45c002" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.398053 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d9sdq" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.399685 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" event={"ID":"b65ba9fc-0a0d-49f2-9991-319b054df0b0","Type":"ContainerDied","Data":"0aaabad0f46973659ca6fdd06412de9bf4454a5075d1e587845f09613f54e20d"} Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.400001 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.426210 4886 scope.go:117] "RemoveContainer" containerID="98637391d646e5def58fbedaa6d121a822611aa1c862a56bc29b7aaefd3ff356" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.446869 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9sdq"] Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.450971 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d9sdq"] Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.460122 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh"] Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.463845 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4k7lh"] Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.558590 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.790775 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.832305 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.879506 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9"] Nov 24 08:52:43 crc kubenswrapper[4886]: E1124 08:52:43.879985 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerName="registry-server" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880014 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerName="registry-server" Nov 24 08:52:43 crc kubenswrapper[4886]: E1124 08:52:43.880028 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32727d31-2207-4688-b70c-6045b674538b" containerName="controller-manager" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880038 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="32727d31-2207-4688-b70c-6045b674538b" containerName="controller-manager" Nov 24 08:52:43 crc kubenswrapper[4886]: E1124 08:52:43.880063 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerName="extract-utilities" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880073 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerName="extract-utilities" Nov 24 08:52:43 crc kubenswrapper[4886]: E1124 08:52:43.880084 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerName="extract-content" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880093 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerName="extract-content" Nov 24 08:52:43 crc kubenswrapper[4886]: E1124 08:52:43.880108 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1a52d4-abae-4519-8288-c1c56ea36e76" containerName="pruner" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880116 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1a52d4-abae-4519-8288-c1c56ea36e76" containerName="pruner" Nov 24 08:52:43 crc kubenswrapper[4886]: E1124 08:52:43.880127 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1afd949e-d0f2-41b8-9632-917df3468232" containerName="collect-profiles" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880138 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afd949e-d0f2-41b8-9632-917df3468232" containerName="collect-profiles" Nov 24 08:52:43 crc kubenswrapper[4886]: E1124 08:52:43.880174 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4ca688-82ea-4513-8a11-1adb96141627" containerName="pruner" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880184 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4ca688-82ea-4513-8a11-1adb96141627" containerName="pruner" Nov 24 08:52:43 crc kubenswrapper[4886]: E1124 08:52:43.880200 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65ba9fc-0a0d-49f2-9991-319b054df0b0" containerName="route-controller-manager" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880211 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65ba9fc-0a0d-49f2-9991-319b054df0b0" containerName="route-controller-manager" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880361 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="32727d31-2207-4688-b70c-6045b674538b" containerName="controller-manager" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880372 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4ca688-82ea-4513-8a11-1adb96141627" containerName="pruner" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880381 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65ba9fc-0a0d-49f2-9991-319b054df0b0" containerName="route-controller-manager" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880394 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1a52d4-abae-4519-8288-c1c56ea36e76" containerName="pruner" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880411 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1afd949e-d0f2-41b8-9632-917df3468232" containerName="collect-profiles" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.880422 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="925c272d-d181-4754-a9cb-9b9b11e18f6c" containerName="registry-server" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.881090 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.883100 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-576c9d799-7j84h"] Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.885120 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.886196 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.886436 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.886624 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.887045 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.887748 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.887950 4886 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.888167 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.888511 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.891053 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.891216 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.891367 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.891420 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.892250 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9"] Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.899521 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.903717 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-576c9d799-7j84h"] Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956184 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-client-ca\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956270 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb2cz\" (UniqueName: \"kubernetes.io/projected/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-kube-api-access-bb2cz\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956354 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt28d\" (UniqueName: \"kubernetes.io/projected/0e48387d-08ef-435e-ab67-8c252131888d-kube-api-access-pt28d\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956546 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-serving-cert\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956615 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-config\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " 
pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956778 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48387d-08ef-435e-ab67-8c252131888d-serving-cert\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956864 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-proxy-ca-bundles\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956916 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-config\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:43 crc kubenswrapper[4886]: I1124 08:52:43.956954 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-client-ca\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.058141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-proxy-ca-bundles\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.058210 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-config\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.058227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-client-ca\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.058278 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-client-ca\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.058302 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb2cz\" (UniqueName: \"kubernetes.io/projected/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-kube-api-access-bb2cz\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc 
kubenswrapper[4886]: I1124 08:52:44.058325 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt28d\" (UniqueName: \"kubernetes.io/projected/0e48387d-08ef-435e-ab67-8c252131888d-kube-api-access-pt28d\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.058347 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-serving-cert\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.058365 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-config\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.058391 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48387d-08ef-435e-ab67-8c252131888d-serving-cert\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.059514 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-client-ca\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: 
\"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.059708 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-config\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.059925 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-proxy-ca-bundles\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.060233 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-client-ca\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.061123 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-config\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.064257 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-serving-cert\") 
pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.065506 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48387d-08ef-435e-ab67-8c252131888d-serving-cert\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.078094 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt28d\" (UniqueName: \"kubernetes.io/projected/0e48387d-08ef-435e-ab67-8c252131888d-kube-api-access-pt28d\") pod \"route-controller-manager-5495dc887c-qlcv9\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.080572 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb2cz\" (UniqueName: \"kubernetes.io/projected/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-kube-api-access-bb2cz\") pod \"controller-manager-576c9d799-7j84h\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.158658 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.159026 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.197924 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.217366 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.228065 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.430059 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9"] Nov 24 08:52:44 crc kubenswrapper[4886]: W1124 08:52:44.436777 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e48387d_08ef_435e_ab67_8c252131888d.slice/crio-8f300800f664b678b2c81df2454387d004b0f35b12546828351e096b2da6a3de WatchSource:0}: Error finding container 8f300800f664b678b2c81df2454387d004b0f35b12546828351e096b2da6a3de: Status 404 returned error can't find the container with id 8f300800f664b678b2c81df2454387d004b0f35b12546828351e096b2da6a3de Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.462120 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.470263 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-576c9d799-7j84h"] Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.861927 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32727d31-2207-4688-b70c-6045b674538b" path="/var/lib/kubelet/pods/32727d31-2207-4688-b70c-6045b674538b/volumes" Nov 24 08:52:44 crc kubenswrapper[4886]: I1124 08:52:44.862994 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b65ba9fc-0a0d-49f2-9991-319b054df0b0" path="/var/lib/kubelet/pods/b65ba9fc-0a0d-49f2-9991-319b054df0b0/volumes" Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.422389 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" event={"ID":"0e48387d-08ef-435e-ab67-8c252131888d","Type":"ContainerStarted","Data":"fddbe4aad875eda3e8150a8c312c15fcae0d6745674d7a67f080e1afc31f34a5"} Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.422445 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" event={"ID":"0e48387d-08ef-435e-ab67-8c252131888d","Type":"ContainerStarted","Data":"8f300800f664b678b2c81df2454387d004b0f35b12546828351e096b2da6a3de"} Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.423942 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.428658 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" event={"ID":"16c90ea0-b2e9-4483-9f38-badc2b15c7e6","Type":"ContainerStarted","Data":"d4c9f09f4234f273f13b425675bbf2ca5cd5e1d2e1898a3fd138ee9e5b71753c"} Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.428736 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" event={"ID":"16c90ea0-b2e9-4483-9f38-badc2b15c7e6","Type":"ContainerStarted","Data":"76fbc2f02551256ce8d58e3644fe7b749404d58b07b07c27d253782f2529fa3d"} Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.432285 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 
08:52:45.453329 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" podStartSLOduration=3.453304927 podStartE2EDuration="3.453304927s" podCreationTimestamp="2025-11-24 08:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:52:45.449563268 +0000 UTC m=+221.336301423" watchObservedRunningTime="2025-11-24 08:52:45.453304927 +0000 UTC m=+221.340043062" Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.639715 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjxlb"] Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.816809 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.817262 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:52:45 crc kubenswrapper[4886]: I1124 08:52:45.857187 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.250326 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.250424 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.287704 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.459889 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" podStartSLOduration=4.4598521 podStartE2EDuration="4.4598521s" podCreationTimestamp="2025-11-24 08:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:52:46.454927607 +0000 UTC m=+222.341665742" watchObservedRunningTime="2025-11-24 08:52:46.4598521 +0000 UTC m=+222.346590245" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.474875 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.485484 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.749442 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.749819 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:52:46 crc kubenswrapper[4886]: I1124 08:52:46.797396 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:52:47 crc kubenswrapper[4886]: I1124 08:52:47.193640 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:52:47 crc kubenswrapper[4886]: I1124 08:52:47.193731 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:52:47 crc kubenswrapper[4886]: I1124 08:52:47.242958 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:52:47 crc 
kubenswrapper[4886]: I1124 08:52:47.441192 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qjxlb" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerName="registry-server" containerID="cri-o://f3630721528ba5c1692d53305eed9f6f2b54706bca1ec91c9cdd444af71f51f3" gracePeriod=2 Nov 24 08:52:47 crc kubenswrapper[4886]: I1124 08:52:47.485578 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:52:47 crc kubenswrapper[4886]: I1124 08:52:47.493695 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:52:48 crc kubenswrapper[4886]: I1124 08:52:48.042815 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxrnh"] Nov 24 08:52:48 crc kubenswrapper[4886]: I1124 08:52:48.447935 4886 generic.go:334] "Generic (PLEG): container finished" podID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerID="f3630721528ba5c1692d53305eed9f6f2b54706bca1ec91c9cdd444af71f51f3" exitCode=0 Nov 24 08:52:48 crc kubenswrapper[4886]: I1124 08:52:48.448088 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qjxlb" event={"ID":"aa41e796-f145-4455-b8fa-d751c98f7b5f","Type":"ContainerDied","Data":"f3630721528ba5c1692d53305eed9f6f2b54706bca1ec91c9cdd444af71f51f3"} Nov 24 08:52:48 crc kubenswrapper[4886]: I1124 08:52:48.448189 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jxrnh" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerName="registry-server" containerID="cri-o://22a6783a3b5463fe689498ab356d871f8a40cd85ed08a30882b603ae0b25dbb3" gracePeriod=2 Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.457469 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-qjxlb" event={"ID":"aa41e796-f145-4455-b8fa-d751c98f7b5f","Type":"ContainerDied","Data":"042328de5a589fff8df2ca6ec0c94315433df229ab551a1ef35213a65075e6b6"} Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.457852 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042328de5a589fff8df2ca6ec0c94315433df229ab551a1ef35213a65075e6b6" Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.472581 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.550755 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-utilities\") pod \"aa41e796-f145-4455-b8fa-d751c98f7b5f\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.550892 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84x56\" (UniqueName: \"kubernetes.io/projected/aa41e796-f145-4455-b8fa-d751c98f7b5f-kube-api-access-84x56\") pod \"aa41e796-f145-4455-b8fa-d751c98f7b5f\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.550935 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-catalog-content\") pod \"aa41e796-f145-4455-b8fa-d751c98f7b5f\" (UID: \"aa41e796-f145-4455-b8fa-d751c98f7b5f\") " Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.552432 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-utilities" (OuterVolumeSpecName: "utilities") pod "aa41e796-f145-4455-b8fa-d751c98f7b5f" (UID: 
"aa41e796-f145-4455-b8fa-d751c98f7b5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.559824 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa41e796-f145-4455-b8fa-d751c98f7b5f-kube-api-access-84x56" (OuterVolumeSpecName: "kube-api-access-84x56") pod "aa41e796-f145-4455-b8fa-d751c98f7b5f" (UID: "aa41e796-f145-4455-b8fa-d751c98f7b5f"). InnerVolumeSpecName "kube-api-access-84x56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.602529 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa41e796-f145-4455-b8fa-d751c98f7b5f" (UID: "aa41e796-f145-4455-b8fa-d751c98f7b5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.652912 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.652964 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84x56\" (UniqueName: \"kubernetes.io/projected/aa41e796-f145-4455-b8fa-d751c98f7b5f-kube-api-access-84x56\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:49 crc kubenswrapper[4886]: I1124 08:52:49.652976 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa41e796-f145-4455-b8fa-d751c98f7b5f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:50 crc kubenswrapper[4886]: I1124 08:52:50.242642 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-j6kp4"] Nov 24 08:52:50 crc kubenswrapper[4886]: I1124 08:52:50.242995 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j6kp4" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerName="registry-server" containerID="cri-o://2638ce912b5af715bdd8224d50528691ef96afa3b7a6c4270cb6111eed84c266" gracePeriod=2 Nov 24 08:52:50 crc kubenswrapper[4886]: I1124 08:52:50.465093 4886 generic.go:334] "Generic (PLEG): container finished" podID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerID="22a6783a3b5463fe689498ab356d871f8a40cd85ed08a30882b603ae0b25dbb3" exitCode=0 Nov 24 08:52:50 crc kubenswrapper[4886]: I1124 08:52:50.465195 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxrnh" event={"ID":"b87e82fa-37f9-46dc-8170-c77373da3ff8","Type":"ContainerDied","Data":"22a6783a3b5463fe689498ab356d871f8a40cd85ed08a30882b603ae0b25dbb3"} Nov 24 08:52:50 crc kubenswrapper[4886]: I1124 08:52:50.465287 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qjxlb" Nov 24 08:52:50 crc kubenswrapper[4886]: I1124 08:52:50.502004 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qjxlb"] Nov 24 08:52:50 crc kubenswrapper[4886]: I1124 08:52:50.507124 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qjxlb"] Nov 24 08:52:50 crc kubenswrapper[4886]: I1124 08:52:50.857876 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" path="/var/lib/kubelet/pods/aa41e796-f145-4455-b8fa-d751c98f7b5f/volumes" Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.485675 4886 generic.go:334] "Generic (PLEG): container finished" podID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerID="2638ce912b5af715bdd8224d50528691ef96afa3b7a6c4270cb6111eed84c266" exitCode=0 Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.485750 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6kp4" event={"ID":"98e3e498-9c73-407d-91f1-1032f1d0a4b2","Type":"ContainerDied","Data":"2638ce912b5af715bdd8224d50528691ef96afa3b7a6c4270cb6111eed84c266"} Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.636328 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.690806 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhmnb\" (UniqueName: \"kubernetes.io/projected/b87e82fa-37f9-46dc-8170-c77373da3ff8-kube-api-access-qhmnb\") pod \"b87e82fa-37f9-46dc-8170-c77373da3ff8\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.690891 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-utilities\") pod \"b87e82fa-37f9-46dc-8170-c77373da3ff8\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.691023 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-catalog-content\") pod \"b87e82fa-37f9-46dc-8170-c77373da3ff8\" (UID: \"b87e82fa-37f9-46dc-8170-c77373da3ff8\") " Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.692054 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-utilities" (OuterVolumeSpecName: "utilities") pod "b87e82fa-37f9-46dc-8170-c77373da3ff8" (UID: "b87e82fa-37f9-46dc-8170-c77373da3ff8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.700724 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87e82fa-37f9-46dc-8170-c77373da3ff8-kube-api-access-qhmnb" (OuterVolumeSpecName: "kube-api-access-qhmnb") pod "b87e82fa-37f9-46dc-8170-c77373da3ff8" (UID: "b87e82fa-37f9-46dc-8170-c77373da3ff8"). InnerVolumeSpecName "kube-api-access-qhmnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.711140 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b87e82fa-37f9-46dc-8170-c77373da3ff8" (UID: "b87e82fa-37f9-46dc-8170-c77373da3ff8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.792800 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhmnb\" (UniqueName: \"kubernetes.io/projected/b87e82fa-37f9-46dc-8170-c77373da3ff8-kube-api-access-qhmnb\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.792938 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:51 crc kubenswrapper[4886]: I1124 08:52:51.792956 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b87e82fa-37f9-46dc-8170-c77373da3ff8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.208087 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.299677 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-utilities\") pod \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.299874 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-catalog-content\") pod \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.299959 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wmcl\" (UniqueName: \"kubernetes.io/projected/98e3e498-9c73-407d-91f1-1032f1d0a4b2-kube-api-access-4wmcl\") pod \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\" (UID: \"98e3e498-9c73-407d-91f1-1032f1d0a4b2\") " Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.300712 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-utilities" (OuterVolumeSpecName: "utilities") pod "98e3e498-9c73-407d-91f1-1032f1d0a4b2" (UID: "98e3e498-9c73-407d-91f1-1032f1d0a4b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.304501 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e3e498-9c73-407d-91f1-1032f1d0a4b2-kube-api-access-4wmcl" (OuterVolumeSpecName: "kube-api-access-4wmcl") pod "98e3e498-9c73-407d-91f1-1032f1d0a4b2" (UID: "98e3e498-9c73-407d-91f1-1032f1d0a4b2"). InnerVolumeSpecName "kube-api-access-4wmcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.394583 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98e3e498-9c73-407d-91f1-1032f1d0a4b2" (UID: "98e3e498-9c73-407d-91f1-1032f1d0a4b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.401851 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.401893 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wmcl\" (UniqueName: \"kubernetes.io/projected/98e3e498-9c73-407d-91f1-1032f1d0a4b2-kube-api-access-4wmcl\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.401906 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98e3e498-9c73-407d-91f1-1032f1d0a4b2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.494328 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxrnh" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.494305 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxrnh" event={"ID":"b87e82fa-37f9-46dc-8170-c77373da3ff8","Type":"ContainerDied","Data":"951704a65933ceeb5ddf0a226ec99e0cb66d75dcb4ba65c58c073f3038f5e686"} Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.494479 4886 scope.go:117] "RemoveContainer" containerID="22a6783a3b5463fe689498ab356d871f8a40cd85ed08a30882b603ae0b25dbb3" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.497730 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6kp4" event={"ID":"98e3e498-9c73-407d-91f1-1032f1d0a4b2","Type":"ContainerDied","Data":"5afb3d7f6b398600d05b416765a77d40574ae78047fbc8a60b6f08109c19eb44"} Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.497802 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6kp4" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.512941 4886 scope.go:117] "RemoveContainer" containerID="a918d64013747cb85ce2389ff2c5f11b74e1d4f6ff1cd2b46680bfe31bf86e42" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.526711 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxrnh"] Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.531379 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxrnh"] Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.535325 4886 scope.go:117] "RemoveContainer" containerID="cd19e2b1c8d0e2350ac66224460f947f74b6b9c3305b95f37c450a6dc5e00738" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.542913 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6kp4"] Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.560665 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j6kp4"] Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.567438 4886 scope.go:117] "RemoveContainer" containerID="2638ce912b5af715bdd8224d50528691ef96afa3b7a6c4270cb6111eed84c266" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.585526 4886 scope.go:117] "RemoveContainer" containerID="a4182f1d2d06eb78cb736fb14193466842c8b11fff92393ae739f35e58d13604" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.605785 4886 scope.go:117] "RemoveContainer" containerID="ede38c567a3731a76d2e177db0ce1daceb461c8ab4441ee11c58e43c57ce7239" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.856957 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" path="/var/lib/kubelet/pods/98e3e498-9c73-407d-91f1-1032f1d0a4b2/volumes" Nov 24 08:52:52 crc kubenswrapper[4886]: I1124 08:52:52.857613 4886 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" path="/var/lib/kubelet/pods/b87e82fa-37f9-46dc-8170-c77373da3ff8/volumes" Nov 24 08:52:54 crc kubenswrapper[4886]: I1124 08:52:54.229500 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:54 crc kubenswrapper[4886]: I1124 08:52:54.235413 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:52:56 crc kubenswrapper[4886]: I1124 08:52:56.087668 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtbkx"] Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.075603 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-576c9d799-7j84h"] Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.076378 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" podUID="16c90ea0-b2e9-4483-9f38-badc2b15c7e6" containerName="controller-manager" containerID="cri-o://d4c9f09f4234f273f13b425675bbf2ca5cd5e1d2e1898a3fd138ee9e5b71753c" gracePeriod=30 Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.104994 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9"] Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.105321 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" podUID="0e48387d-08ef-435e-ab67-8c252131888d" containerName="route-controller-manager" containerID="cri-o://fddbe4aad875eda3e8150a8c312c15fcae0d6745674d7a67f080e1afc31f34a5" gracePeriod=30 Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 
08:53:02.560922 4886 generic.go:334] "Generic (PLEG): container finished" podID="0e48387d-08ef-435e-ab67-8c252131888d" containerID="fddbe4aad875eda3e8150a8c312c15fcae0d6745674d7a67f080e1afc31f34a5" exitCode=0 Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.561019 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" event={"ID":"0e48387d-08ef-435e-ab67-8c252131888d","Type":"ContainerDied","Data":"fddbe4aad875eda3e8150a8c312c15fcae0d6745674d7a67f080e1afc31f34a5"} Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.562714 4886 generic.go:334] "Generic (PLEG): container finished" podID="16c90ea0-b2e9-4483-9f38-badc2b15c7e6" containerID="d4c9f09f4234f273f13b425675bbf2ca5cd5e1d2e1898a3fd138ee9e5b71753c" exitCode=0 Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.562762 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" event={"ID":"16c90ea0-b2e9-4483-9f38-badc2b15c7e6","Type":"ContainerDied","Data":"d4c9f09f4234f273f13b425675bbf2ca5cd5e1d2e1898a3fd138ee9e5b71753c"} Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.646333 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.740178 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.749005 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e48387d-08ef-435e-ab67-8c252131888d-serving-cert\") pod \"0e48387d-08ef-435e-ab67-8c252131888d\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.749088 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-client-ca\") pod \"0e48387d-08ef-435e-ab67-8c252131888d\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.749120 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt28d\" (UniqueName: \"kubernetes.io/projected/0e48387d-08ef-435e-ab67-8c252131888d-kube-api-access-pt28d\") pod \"0e48387d-08ef-435e-ab67-8c252131888d\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.749194 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-config\") pod \"0e48387d-08ef-435e-ab67-8c252131888d\" (UID: \"0e48387d-08ef-435e-ab67-8c252131888d\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.750367 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-config" (OuterVolumeSpecName: "config") pod "0e48387d-08ef-435e-ab67-8c252131888d" (UID: "0e48387d-08ef-435e-ab67-8c252131888d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.750918 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e48387d-08ef-435e-ab67-8c252131888d" (UID: "0e48387d-08ef-435e-ab67-8c252131888d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.756315 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e48387d-08ef-435e-ab67-8c252131888d-kube-api-access-pt28d" (OuterVolumeSpecName: "kube-api-access-pt28d") pod "0e48387d-08ef-435e-ab67-8c252131888d" (UID: "0e48387d-08ef-435e-ab67-8c252131888d"). InnerVolumeSpecName "kube-api-access-pt28d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.756566 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e48387d-08ef-435e-ab67-8c252131888d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e48387d-08ef-435e-ab67-8c252131888d" (UID: "0e48387d-08ef-435e-ab67-8c252131888d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850090 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-serving-cert\") pod \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850331 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-proxy-ca-bundles\") pod \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850378 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb2cz\" (UniqueName: \"kubernetes.io/projected/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-kube-api-access-bb2cz\") pod \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850401 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-client-ca\") pod \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850434 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-config\") pod \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\" (UID: \"16c90ea0-b2e9-4483-9f38-badc2b15c7e6\") " Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850760 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0e48387d-08ef-435e-ab67-8c252131888d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850777 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850789 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt28d\" (UniqueName: \"kubernetes.io/projected/0e48387d-08ef-435e-ab67-8c252131888d-kube-api-access-pt28d\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.850802 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e48387d-08ef-435e-ab67-8c252131888d-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.851231 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "16c90ea0-b2e9-4483-9f38-badc2b15c7e6" (UID: "16c90ea0-b2e9-4483-9f38-badc2b15c7e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.851337 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16c90ea0-b2e9-4483-9f38-badc2b15c7e6" (UID: "16c90ea0-b2e9-4483-9f38-badc2b15c7e6"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.851430 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-config" (OuterVolumeSpecName: "config") pod "16c90ea0-b2e9-4483-9f38-badc2b15c7e6" (UID: "16c90ea0-b2e9-4483-9f38-badc2b15c7e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.853024 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16c90ea0-b2e9-4483-9f38-badc2b15c7e6" (UID: "16c90ea0-b2e9-4483-9f38-badc2b15c7e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.853254 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-kube-api-access-bb2cz" (OuterVolumeSpecName: "kube-api-access-bb2cz") pod "16c90ea0-b2e9-4483-9f38-badc2b15c7e6" (UID: "16c90ea0-b2e9-4483-9f38-badc2b15c7e6"). InnerVolumeSpecName "kube-api-access-bb2cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.952306 4886 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.952344 4886 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.952358 4886 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.952367 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb2cz\" (UniqueName: \"kubernetes.io/projected/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-kube-api-access-bb2cz\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:02 crc kubenswrapper[4886]: I1124 08:53:02.952379 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c90ea0-b2e9-4483-9f38-badc2b15c7e6-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.570203 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.570200 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-576c9d799-7j84h" event={"ID":"16c90ea0-b2e9-4483-9f38-badc2b15c7e6","Type":"ContainerDied","Data":"76fbc2f02551256ce8d58e3644fe7b749404d58b07b07c27d253782f2529fa3d"} Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.570355 4886 scope.go:117] "RemoveContainer" containerID="d4c9f09f4234f273f13b425675bbf2ca5cd5e1d2e1898a3fd138ee9e5b71753c" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.571558 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" event={"ID":"0e48387d-08ef-435e-ab67-8c252131888d","Type":"ContainerDied","Data":"8f300800f664b678b2c81df2454387d004b0f35b12546828351e096b2da6a3de"} Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.571631 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.592591 4886 scope.go:117] "RemoveContainer" containerID="fddbe4aad875eda3e8150a8c312c15fcae0d6745674d7a67f080e1afc31f34a5" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.599921 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9"] Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.604451 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5495dc887c-qlcv9"] Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.612849 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-576c9d799-7j84h"] Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.616685 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-576c9d799-7j84h"] Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885243 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28"] Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885568 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerName="extract-utilities" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885586 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerName="extract-utilities" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885600 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c90ea0-b2e9-4483-9f38-badc2b15c7e6" containerName="controller-manager" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885608 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="16c90ea0-b2e9-4483-9f38-badc2b15c7e6" containerName="controller-manager" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885621 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerName="extract-content" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885629 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerName="extract-content" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885642 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerName="extract-content" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885650 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerName="extract-content" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885663 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerName="extract-utilities" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885671 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerName="extract-utilities" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885681 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885689 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885699 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e48387d-08ef-435e-ab67-8c252131888d" containerName="route-controller-manager" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885708 4886 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0e48387d-08ef-435e-ab67-8c252131888d" containerName="route-controller-manager" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885720 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885727 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885737 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerName="extract-utilities" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885745 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerName="extract-utilities" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885760 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerName="extract-content" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885768 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerName="extract-content" Nov 24 08:53:03 crc kubenswrapper[4886]: E1124 08:53:03.885779 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885788 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885911 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e3e498-9c73-407d-91f1-1032f1d0a4b2" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885922 4886 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aa41e796-f145-4455-b8fa-d751c98f7b5f" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885934 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e48387d-08ef-435e-ab67-8c252131888d" containerName="route-controller-manager" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885944 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c90ea0-b2e9-4483-9f38-badc2b15c7e6" containerName="controller-manager" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.885957 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87e82fa-37f9-46dc-8170-c77373da3ff8" containerName="registry-server" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.886463 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.888279 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-845f9d5998-2fgxv"] Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.888875 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.893447 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.893676 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.893713 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.893953 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.894086 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.894225 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.894377 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.900815 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.902014 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.902005 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 08:53:03 crc 
kubenswrapper[4886]: I1124 08:53:03.902602 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.904016 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.910665 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-845f9d5998-2fgxv"] Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.922388 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.951180 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28"] Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966139 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53d5461d-1198-4a7a-84f3-b413ea336897-config\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966270 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53d5461d-1198-4a7a-84f3-b413ea336897-serving-cert\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966314 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-proxy-ca-bundles\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966344 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-client-ca\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966413 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-serving-cert\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966435 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jlm\" (UniqueName: \"kubernetes.io/projected/53d5461d-1198-4a7a-84f3-b413ea336897-kube-api-access-r7jlm\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966474 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-config\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " 
pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966504 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgzk9\" (UniqueName: \"kubernetes.io/projected/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-kube-api-access-xgzk9\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:03 crc kubenswrapper[4886]: I1124 08:53:03.966559 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53d5461d-1198-4a7a-84f3-b413ea336897-client-ca\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.067318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-config\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.067384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgzk9\" (UniqueName: \"kubernetes.io/projected/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-kube-api-access-xgzk9\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.067425 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/53d5461d-1198-4a7a-84f3-b413ea336897-client-ca\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.067495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53d5461d-1198-4a7a-84f3-b413ea336897-config\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.067512 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53d5461d-1198-4a7a-84f3-b413ea336897-serving-cert\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.067533 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-proxy-ca-bundles\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.067568 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-client-ca\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: 
I1124 08:53:04.067621 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-serving-cert\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.067650 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jlm\" (UniqueName: \"kubernetes.io/projected/53d5461d-1198-4a7a-84f3-b413ea336897-kube-api-access-r7jlm\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.069394 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-client-ca\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.069405 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-proxy-ca-bundles\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.069826 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-config\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " 
pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.070370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53d5461d-1198-4a7a-84f3-b413ea336897-config\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.070436 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53d5461d-1198-4a7a-84f3-b413ea336897-client-ca\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.075766 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53d5461d-1198-4a7a-84f3-b413ea336897-serving-cert\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.076355 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-serving-cert\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.092232 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jlm\" (UniqueName: 
\"kubernetes.io/projected/53d5461d-1198-4a7a-84f3-b413ea336897-kube-api-access-r7jlm\") pod \"route-controller-manager-86d9f84578-m4q28\" (UID: \"53d5461d-1198-4a7a-84f3-b413ea336897\") " pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.094344 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgzk9\" (UniqueName: \"kubernetes.io/projected/20546bcb-e5ac-4ee0-ae5b-9d708c09eccc-kube-api-access-xgzk9\") pod \"controller-manager-845f9d5998-2fgxv\" (UID: \"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc\") " pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.213864 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.222658 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.698782 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-845f9d5998-2fgxv"] Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.778124 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28"] Nov 24 08:53:04 crc kubenswrapper[4886]: W1124 08:53:04.802479 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53d5461d_1198_4a7a_84f3_b413ea336897.slice/crio-4584a102d56bc3929f96c5c3575e66aa3de2f7e194c43d558ee51e32bc08f6aa WatchSource:0}: Error finding container 4584a102d56bc3929f96c5c3575e66aa3de2f7e194c43d558ee51e32bc08f6aa: Status 404 returned error can't find the container with id 4584a102d56bc3929f96c5c3575e66aa3de2f7e194c43d558ee51e32bc08f6aa Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.886590 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e48387d-08ef-435e-ab67-8c252131888d" path="/var/lib/kubelet/pods/0e48387d-08ef-435e-ab67-8c252131888d/volumes" Nov 24 08:53:04 crc kubenswrapper[4886]: I1124 08:53:04.888811 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c90ea0-b2e9-4483-9f38-badc2b15c7e6" path="/var/lib/kubelet/pods/16c90ea0-b2e9-4483-9f38-badc2b15c7e6/volumes" Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.592675 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" event={"ID":"53d5461d-1198-4a7a-84f3-b413ea336897","Type":"ContainerStarted","Data":"7f8abb84afa649962fd56d6e874b87dd3137ae21f60fbec0f1bec879a703f2ba"} Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.593054 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" event={"ID":"53d5461d-1198-4a7a-84f3-b413ea336897","Type":"ContainerStarted","Data":"4584a102d56bc3929f96c5c3575e66aa3de2f7e194c43d558ee51e32bc08f6aa"} Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.593733 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.594671 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" event={"ID":"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc","Type":"ContainerStarted","Data":"5e10f6df44e334a4bf48daee9c8dbc4f438e2610aed507401a2ba0fab96fb881"} Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.594729 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" event={"ID":"20546bcb-e5ac-4ee0-ae5b-9d708c09eccc","Type":"ContainerStarted","Data":"4c74cdbe5263edf4f405194095b6319d0da72d0097fe87d7b282dc0b96c1ea16"} Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.595852 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.599007 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.601366 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.640412 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d9f84578-m4q28" 
podStartSLOduration=3.64038914 podStartE2EDuration="3.64038914s" podCreationTimestamp="2025-11-24 08:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:53:05.616457193 +0000 UTC m=+241.503195328" watchObservedRunningTime="2025-11-24 08:53:05.64038914 +0000 UTC m=+241.527127265" Nov 24 08:53:05 crc kubenswrapper[4886]: I1124 08:53:05.640523 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-845f9d5998-2fgxv" podStartSLOduration=3.640518744 podStartE2EDuration="3.640518744s" podCreationTimestamp="2025-11-24 08:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:53:05.638553286 +0000 UTC m=+241.525291431" watchObservedRunningTime="2025-11-24 08:53:05.640518744 +0000 UTC m=+241.527256879" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.123409 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" podUID="168f234d-da70-475a-b6df-2771ab11368e" containerName="oauth-openshift" containerID="cri-o://aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa" gracePeriod=15 Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.607986 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.675662 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9d745f8b5-mng8b"] Nov 24 08:53:21 crc kubenswrapper[4886]: E1124 08:53:21.675912 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168f234d-da70-475a-b6df-2771ab11368e" containerName="oauth-openshift" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.675931 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="168f234d-da70-475a-b6df-2771ab11368e" containerName="oauth-openshift" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.676026 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="168f234d-da70-475a-b6df-2771ab11368e" containerName="oauth-openshift" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.676423 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.685530 4886 generic.go:334] "Generic (PLEG): container finished" podID="168f234d-da70-475a-b6df-2771ab11368e" containerID="aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa" exitCode=0 Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.685576 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" event={"ID":"168f234d-da70-475a-b6df-2771ab11368e","Type":"ContainerDied","Data":"aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa"} Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.685606 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" event={"ID":"168f234d-da70-475a-b6df-2771ab11368e","Type":"ContainerDied","Data":"144cfe1bbe6e337034c9d8f295c50e3c42ad810bc14f2d02242ec8d6774217e8"} Nov 24 
08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.685625 4886 scope.go:117] "RemoveContainer" containerID="aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.685765 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vtbkx" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.689904 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9d745f8b5-mng8b"] Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.710689 4886 scope.go:117] "RemoveContainer" containerID="aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa" Nov 24 08:53:21 crc kubenswrapper[4886]: E1124 08:53:21.711684 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa\": container with ID starting with aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa not found: ID does not exist" containerID="aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.711718 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa"} err="failed to get container status \"aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa\": rpc error: code = NotFound desc = could not find container \"aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa\": container with ID starting with aa38250f3526b2779ae2dc43ce0fb9ee9b4e330642cbe8145c1656f4468c24fa not found: ID does not exist" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728054 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-provider-selection\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728107 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-audit-policies\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728692 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-cliconfig\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728744 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-serving-cert\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728768 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-error\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728786 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-router-certs\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728847 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-idp-0-file-data\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728872 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-login\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728895 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmhrd\" (UniqueName: \"kubernetes.io/projected/168f234d-da70-475a-b6df-2771ab11368e-kube-api-access-bmhrd\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728923 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-session\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.728971 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-trusted-ca-bundle\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729029 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-service-ca\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729073 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/168f234d-da70-475a-b6df-2771ab11368e-audit-dir\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729100 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-ocp-branding-template\") pod \"168f234d-da70-475a-b6df-2771ab11368e\" (UID: \"168f234d-da70-475a-b6df-2771ab11368e\") " Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729378 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-audit-policies\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729426 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729452 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-audit-dir\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729470 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-error\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729488 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-login\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729513 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-session\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " 
pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729535 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-service-ca\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729550 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729572 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729590 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7lsv\" (UniqueName: \"kubernetes.io/projected/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-kube-api-access-w7lsv\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729616 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729642 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729662 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-router-certs\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729730 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.729755 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.730100 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.730095 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/168f234d-da70-475a-b6df-2771ab11368e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.730925 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.735194 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.735470 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.745450 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168f234d-da70-475a-b6df-2771ab11368e-kube-api-access-bmhrd" (OuterVolumeSpecName: "kube-api-access-bmhrd") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "kube-api-access-bmhrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.745582 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.745994 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.746349 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.748500 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.750551 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.750916 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "168f234d-da70-475a-b6df-2771ab11368e" (UID: "168f234d-da70-475a-b6df-2771ab11368e"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831380 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-audit-policies\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831468 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831499 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-audit-dir\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831519 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-error\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831543 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-login\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831572 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-session\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831602 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-service-ca\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831626 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831645 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " 
pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831661 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7lsv\" (UniqueName: \"kubernetes.io/projected/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-kube-api-access-w7lsv\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831660 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-audit-dir\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831696 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831747 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831773 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-router-certs\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831792 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831849 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831862 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831876 4886 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/168f234d-da70-475a-b6df-2771ab11368e-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831888 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831904 4886 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831917 4886 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831931 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831944 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831957 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831972 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.831986 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc 
kubenswrapper[4886]: I1124 08:53:21.831999 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.832014 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmhrd\" (UniqueName: \"kubernetes.io/projected/168f234d-da70-475a-b6df-2771ab11368e-kube-api-access-bmhrd\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.832026 4886 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/168f234d-da70-475a-b6df-2771ab11368e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.832468 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-audit-policies\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.832489 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-service-ca\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.833337 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.834842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.836327 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.836349 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-error\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.836356 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-session\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.836511 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-login\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.836704 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.837169 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.837737 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-router-certs\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.838145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: 
\"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.851454 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7lsv\" (UniqueName: \"kubernetes.io/projected/1fa6a847-810d-4487-bbca-6a8dbc2c6a5c-kube-api-access-w7lsv\") pod \"oauth-openshift-9d745f8b5-mng8b\" (UID: \"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c\") " pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:21 crc kubenswrapper[4886]: I1124 08:53:21.993866 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:22 crc kubenswrapper[4886]: I1124 08:53:22.020063 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtbkx"] Nov 24 08:53:22 crc kubenswrapper[4886]: I1124 08:53:22.025744 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vtbkx"] Nov 24 08:53:22 crc kubenswrapper[4886]: I1124 08:53:22.393741 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9d745f8b5-mng8b"] Nov 24 08:53:22 crc kubenswrapper[4886]: I1124 08:53:22.691728 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" event={"ID":"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c","Type":"ContainerStarted","Data":"2620591687ef408a7141cb13a4f1c532ed42233ad469c483817b6a99bc085ef1"} Nov 24 08:53:22 crc kubenswrapper[4886]: I1124 08:53:22.691783 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" event={"ID":"1fa6a847-810d-4487-bbca-6a8dbc2c6a5c","Type":"ContainerStarted","Data":"c2b730fda2d35952420cd92760697cb6fc72ab4d174e5f783c54def4a77576f6"} Nov 24 08:53:22 crc kubenswrapper[4886]: I1124 08:53:22.692162 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:22 crc kubenswrapper[4886]: I1124 08:53:22.713474 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" podStartSLOduration=26.713442676 podStartE2EDuration="26.713442676s" podCreationTimestamp="2025-11-24 08:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:53:22.710891272 +0000 UTC m=+258.597629427" watchObservedRunningTime="2025-11-24 08:53:22.713442676 +0000 UTC m=+258.600180821" Nov 24 08:53:22 crc kubenswrapper[4886]: I1124 08:53:22.858793 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168f234d-da70-475a-b6df-2771ab11368e" path="/var/lib/kubelet/pods/168f234d-da70-475a-b6df-2771ab11368e/volumes" Nov 24 08:53:23 crc kubenswrapper[4886]: I1124 08:53:23.229305 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9d745f8b5-mng8b" Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.880847 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2s97s"] Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.883573 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2s97s" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="registry-server" containerID="cri-o://2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d" gracePeriod=30 Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.889152 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l8vn8"] Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.889489 4886 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-l8vn8" podUID="d89bb378-d235-4377-9908-0008691b9174" containerName="registry-server" containerID="cri-o://af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f" gracePeriod=30 Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.903574 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsmf5"] Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.904241 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" podUID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" containerName="marketplace-operator" containerID="cri-o://c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732" gracePeriod=30 Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.914683 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbg5c"] Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.914994 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tbg5c" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" containerName="registry-server" containerID="cri-o://d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2" gracePeriod=30 Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.920928 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pg5ls"] Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.933447 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pg5ls" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerName="registry-server" containerID="cri-o://aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca" gracePeriod=30 Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.936218 4886 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-psbrg"] Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.937191 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:35 crc kubenswrapper[4886]: I1124 08:53:35.952311 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-psbrg"] Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.030962 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c32598f-bb74-4615-b8f9-77f36f97f80a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.031014 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlsb9\" (UniqueName: \"kubernetes.io/projected/5c32598f-bb74-4615-b8f9-77f36f97f80a-kube-api-access-rlsb9\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.031050 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c32598f-bb74-4615-b8f9-77f36f97f80a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.132580 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c32598f-bb74-4615-b8f9-77f36f97f80a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.132889 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlsb9\" (UniqueName: \"kubernetes.io/projected/5c32598f-bb74-4615-b8f9-77f36f97f80a-kube-api-access-rlsb9\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.132930 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c32598f-bb74-4615-b8f9-77f36f97f80a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.134991 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c32598f-bb74-4615-b8f9-77f36f97f80a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.139085 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5c32598f-bb74-4615-b8f9-77f36f97f80a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.152039 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlsb9\" (UniqueName: \"kubernetes.io/projected/5c32598f-bb74-4615-b8f9-77f36f97f80a-kube-api-access-rlsb9\") pod \"marketplace-operator-79b997595-psbrg\" (UID: \"5c32598f-bb74-4615-b8f9-77f36f97f80a\") " pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.387533 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.426938 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.539795 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74hf9\" (UniqueName: \"kubernetes.io/projected/d89bb378-d235-4377-9908-0008691b9174-kube-api-access-74hf9\") pod \"d89bb378-d235-4377-9908-0008691b9174\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.539988 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-utilities\") pod \"d89bb378-d235-4377-9908-0008691b9174\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.540050 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-catalog-content\") pod \"d89bb378-d235-4377-9908-0008691b9174\" (UID: \"d89bb378-d235-4377-9908-0008691b9174\") " Nov 24 08:53:36 crc kubenswrapper[4886]: 
I1124 08:53:36.541605 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-utilities" (OuterVolumeSpecName: "utilities") pod "d89bb378-d235-4377-9908-0008691b9174" (UID: "d89bb378-d235-4377-9908-0008691b9174"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.544402 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89bb378-d235-4377-9908-0008691b9174-kube-api-access-74hf9" (OuterVolumeSpecName: "kube-api-access-74hf9") pod "d89bb378-d235-4377-9908-0008691b9174" (UID: "d89bb378-d235-4377-9908-0008691b9174"). InnerVolumeSpecName "kube-api-access-74hf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.641308 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74hf9\" (UniqueName: \"kubernetes.io/projected/d89bb378-d235-4377-9908-0008691b9174-kube-api-access-74hf9\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.641640 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.644452 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d89bb378-d235-4377-9908-0008691b9174" (UID: "d89bb378-d235-4377-9908-0008691b9174"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.644521 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.673798 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.680807 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.683354 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742124 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-utilities\") pod \"44504d41-1a7d-4a15-a270-24325b0954a9\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742194 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-catalog-content\") pod \"a5f30dce-707e-45e7-a928-4602478ac07d\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742229 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkrdt\" (UniqueName: \"kubernetes.io/projected/44504d41-1a7d-4a15-a270-24325b0954a9-kube-api-access-vkrdt\") pod \"44504d41-1a7d-4a15-a270-24325b0954a9\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742282 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-operator-metrics\") pod \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742322 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-utilities\") pod \"a5f30dce-707e-45e7-a928-4602478ac07d\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742348 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8fwn\" (UniqueName: \"kubernetes.io/projected/5cfd24b4-c215-49b2-af8e-a3875c05c738-kube-api-access-v8fwn\") pod \"5cfd24b4-c215-49b2-af8e-a3875c05c738\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742388 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z5tm\" (UniqueName: \"kubernetes.io/projected/a5f30dce-707e-45e7-a928-4602478ac07d-kube-api-access-9z5tm\") pod \"a5f30dce-707e-45e7-a928-4602478ac07d\" (UID: \"a5f30dce-707e-45e7-a928-4602478ac07d\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742411 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-catalog-content\") pod \"5cfd24b4-c215-49b2-af8e-a3875c05c738\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742473 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-trusted-ca\") pod \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " Nov 24 
08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742539 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx559\" (UniqueName: \"kubernetes.io/projected/364b3e42-dafa-45cd-bf38-545cc2eb9e21-kube-api-access-lx559\") pod \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\" (UID: \"364b3e42-dafa-45cd-bf38-545cc2eb9e21\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742567 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-catalog-content\") pod \"44504d41-1a7d-4a15-a270-24325b0954a9\" (UID: \"44504d41-1a7d-4a15-a270-24325b0954a9\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742587 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-utilities\") pod \"5cfd24b4-c215-49b2-af8e-a3875c05c738\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742857 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89bb378-d235-4377-9908-0008691b9174-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.742979 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-utilities" (OuterVolumeSpecName: "utilities") pod "44504d41-1a7d-4a15-a270-24325b0954a9" (UID: "44504d41-1a7d-4a15-a270-24325b0954a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.743487 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "364b3e42-dafa-45cd-bf38-545cc2eb9e21" (UID: "364b3e42-dafa-45cd-bf38-545cc2eb9e21"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.743480 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-utilities" (OuterVolumeSpecName: "utilities") pod "a5f30dce-707e-45e7-a928-4602478ac07d" (UID: "a5f30dce-707e-45e7-a928-4602478ac07d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.744443 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-utilities" (OuterVolumeSpecName: "utilities") pod "5cfd24b4-c215-49b2-af8e-a3875c05c738" (UID: "5cfd24b4-c215-49b2-af8e-a3875c05c738"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.746650 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f30dce-707e-45e7-a928-4602478ac07d-kube-api-access-9z5tm" (OuterVolumeSpecName: "kube-api-access-9z5tm") pod "a5f30dce-707e-45e7-a928-4602478ac07d" (UID: "a5f30dce-707e-45e7-a928-4602478ac07d"). InnerVolumeSpecName "kube-api-access-9z5tm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.746766 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44504d41-1a7d-4a15-a270-24325b0954a9-kube-api-access-vkrdt" (OuterVolumeSpecName: "kube-api-access-vkrdt") pod "44504d41-1a7d-4a15-a270-24325b0954a9" (UID: "44504d41-1a7d-4a15-a270-24325b0954a9"). InnerVolumeSpecName "kube-api-access-vkrdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.746935 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "364b3e42-dafa-45cd-bf38-545cc2eb9e21" (UID: "364b3e42-dafa-45cd-bf38-545cc2eb9e21"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.747392 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364b3e42-dafa-45cd-bf38-545cc2eb9e21-kube-api-access-lx559" (OuterVolumeSpecName: "kube-api-access-lx559") pod "364b3e42-dafa-45cd-bf38-545cc2eb9e21" (UID: "364b3e42-dafa-45cd-bf38-545cc2eb9e21"). InnerVolumeSpecName "kube-api-access-lx559". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.749349 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfd24b4-c215-49b2-af8e-a3875c05c738-kube-api-access-v8fwn" (OuterVolumeSpecName: "kube-api-access-v8fwn") pod "5cfd24b4-c215-49b2-af8e-a3875c05c738" (UID: "5cfd24b4-c215-49b2-af8e-a3875c05c738"). InnerVolumeSpecName "kube-api-access-v8fwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.763280 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5f30dce-707e-45e7-a928-4602478ac07d" (UID: "a5f30dce-707e-45e7-a928-4602478ac07d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.785960 4886 generic.go:334] "Generic (PLEG): container finished" podID="d89bb378-d235-4377-9908-0008691b9174" containerID="af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f" exitCode=0 Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.786087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8vn8" event={"ID":"d89bb378-d235-4377-9908-0008691b9174","Type":"ContainerDied","Data":"af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.786128 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8vn8" event={"ID":"d89bb378-d235-4377-9908-0008691b9174","Type":"ContainerDied","Data":"9d477cb4e0cde3e72c15845d9afb63fdb4e8904dcd37a501adc1c90e4062e9a0"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.786151 4886 scope.go:117] "RemoveContainer" containerID="af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.786328 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l8vn8" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.791119 4886 generic.go:334] "Generic (PLEG): container finished" podID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" containerID="c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732" exitCode=0 Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.791304 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" event={"ID":"364b3e42-dafa-45cd-bf38-545cc2eb9e21","Type":"ContainerDied","Data":"c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.791370 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" event={"ID":"364b3e42-dafa-45cd-bf38-545cc2eb9e21","Type":"ContainerDied","Data":"e377aca0889ce4180db9788171f0bda4367d64116a43c0d48d257c4b2e5ef494"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.791409 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tsmf5" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.797935 4886 generic.go:334] "Generic (PLEG): container finished" podID="44504d41-1a7d-4a15-a270-24325b0954a9" containerID="2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d" exitCode=0 Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.798018 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2s97s" event={"ID":"44504d41-1a7d-4a15-a270-24325b0954a9","Type":"ContainerDied","Data":"2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.798056 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2s97s" event={"ID":"44504d41-1a7d-4a15-a270-24325b0954a9","Type":"ContainerDied","Data":"a7ee8f094397260ac6af09a294086fb9caa9364a2b7ce9cb9f304f524e8e8c15"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.798170 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2s97s" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.802220 4886 generic.go:334] "Generic (PLEG): container finished" podID="a5f30dce-707e-45e7-a928-4602478ac07d" containerID="d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2" exitCode=0 Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.802293 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbg5c" event={"ID":"a5f30dce-707e-45e7-a928-4602478ac07d","Type":"ContainerDied","Data":"d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.802322 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbg5c" event={"ID":"a5f30dce-707e-45e7-a928-4602478ac07d","Type":"ContainerDied","Data":"62204c3138f6bb2fcd522e6fc706c1bd910bc9b22ad92d371867bb9c576be373"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.802394 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbg5c" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.810824 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44504d41-1a7d-4a15-a270-24325b0954a9" (UID: "44504d41-1a7d-4a15-a270-24325b0954a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.811375 4886 generic.go:334] "Generic (PLEG): container finished" podID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerID="aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca" exitCode=0 Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.811456 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg5ls" event={"ID":"5cfd24b4-c215-49b2-af8e-a3875c05c738","Type":"ContainerDied","Data":"aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.811477 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg5ls" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.811622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg5ls" event={"ID":"5cfd24b4-c215-49b2-af8e-a3875c05c738","Type":"ContainerDied","Data":"7a352e3ab3551542b21d9f379fc66befe1f041a2524d17733b2482398058a79e"} Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.832636 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l8vn8"] Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.835420 4886 scope.go:117] "RemoveContainer" containerID="e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.837506 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l8vn8"] Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.844774 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cfd24b4-c215-49b2-af8e-a3875c05c738" (UID: 
"5cfd24b4-c215-49b2-af8e-a3875c05c738"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.844950 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-catalog-content\") pod \"5cfd24b4-c215-49b2-af8e-a3875c05c738\" (UID: \"5cfd24b4-c215-49b2-af8e-a3875c05c738\") " Nov 24 08:53:36 crc kubenswrapper[4886]: W1124 08:53:36.845430 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5cfd24b4-c215-49b2-af8e-a3875c05c738/volumes/kubernetes.io~empty-dir/catalog-content Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845457 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cfd24b4-c215-49b2-af8e-a3875c05c738" (UID: "5cfd24b4-c215-49b2-af8e-a3875c05c738"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845754 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845777 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845791 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkrdt\" (UniqueName: \"kubernetes.io/projected/44504d41-1a7d-4a15-a270-24325b0954a9-kube-api-access-vkrdt\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845803 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845815 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f30dce-707e-45e7-a928-4602478ac07d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845829 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8fwn\" (UniqueName: \"kubernetes.io/projected/5cfd24b4-c215-49b2-af8e-a3875c05c738-kube-api-access-v8fwn\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845841 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z5tm\" (UniqueName: \"kubernetes.io/projected/a5f30dce-707e-45e7-a928-4602478ac07d-kube-api-access-9z5tm\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 
crc kubenswrapper[4886]: I1124 08:53:36.845852 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845863 4886 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/364b3e42-dafa-45cd-bf38-545cc2eb9e21-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845876 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx559\" (UniqueName: \"kubernetes.io/projected/364b3e42-dafa-45cd-bf38-545cc2eb9e21-kube-api-access-lx559\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845889 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44504d41-1a7d-4a15-a270-24325b0954a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.845900 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfd24b4-c215-49b2-af8e-a3875c05c738-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.861249 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsmf5"] Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.882972 4886 scope.go:117] "RemoveContainer" containerID="d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.888298 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89bb378-d235-4377-9908-0008691b9174" path="/var/lib/kubelet/pods/d89bb378-d235-4377-9908-0008691b9174/volumes" Nov 24 08:53:36 crc 
kubenswrapper[4886]: I1124 08:53:36.892841 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tsmf5"] Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.896281 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbg5c"] Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.900805 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbg5c"] Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.912360 4886 scope.go:117] "RemoveContainer" containerID="af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f" Nov 24 08:53:36 crc kubenswrapper[4886]: E1124 08:53:36.913643 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f\": container with ID starting with af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f not found: ID does not exist" containerID="af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.913701 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f"} err="failed to get container status \"af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f\": rpc error: code = NotFound desc = could not find container \"af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f\": container with ID starting with af19d2ea3795eee061798d35cb35e3b17d9b4a723418e8f5dcc3af517c73514f not found: ID does not exist" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.913737 4886 scope.go:117] "RemoveContainer" containerID="e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1" Nov 24 08:53:36 crc kubenswrapper[4886]: E1124 08:53:36.914880 4886 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1\": container with ID starting with e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1 not found: ID does not exist" containerID="e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.914931 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1"} err="failed to get container status \"e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1\": rpc error: code = NotFound desc = could not find container \"e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1\": container with ID starting with e45e9766eac36d89bfe48247824653f50563c00282cb4ab4e2371338cf454db1 not found: ID does not exist" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.914958 4886 scope.go:117] "RemoveContainer" containerID="d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b" Nov 24 08:53:36 crc kubenswrapper[4886]: E1124 08:53:36.915604 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b\": container with ID starting with d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b not found: ID does not exist" containerID="d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.915663 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b"} err="failed to get container status \"d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b\": rpc error: code = NotFound 
desc = could not find container \"d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b\": container with ID starting with d89e70664807edd91193d5a1620e87b60bd480d51d2eb41e57e890ce5b03ed1b not found: ID does not exist" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.915703 4886 scope.go:117] "RemoveContainer" containerID="c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.937869 4886 scope.go:117] "RemoveContainer" containerID="c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732" Nov 24 08:53:36 crc kubenswrapper[4886]: E1124 08:53:36.938725 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732\": container with ID starting with c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732 not found: ID does not exist" containerID="c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.938803 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732"} err="failed to get container status \"c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732\": rpc error: code = NotFound desc = could not find container \"c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732\": container with ID starting with c05e0230c62c59cb2853c62c63d283b0eef0c8e84b47e899b77f46f53237b732 not found: ID does not exist" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.938841 4886 scope.go:117] "RemoveContainer" containerID="2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.957511 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-psbrg"] Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.964879 4886 scope.go:117] "RemoveContainer" containerID="eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b" Nov 24 08:53:36 crc kubenswrapper[4886]: I1124 08:53:36.985998 4886 scope.go:117] "RemoveContainer" containerID="2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.009047 4886 scope.go:117] "RemoveContainer" containerID="2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.009754 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d\": container with ID starting with 2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d not found: ID does not exist" containerID="2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.009810 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d"} err="failed to get container status \"2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d\": rpc error: code = NotFound desc = could not find container \"2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d\": container with ID starting with 2545c2efdab5cce529af95431ab622627a48921caa34bd04e537da85fc028b1d not found: ID does not exist" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.009850 4886 scope.go:117] "RemoveContainer" containerID="eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.010259 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b\": container with ID starting with eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b not found: ID does not exist" containerID="eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.010310 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b"} err="failed to get container status \"eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b\": rpc error: code = NotFound desc = could not find container \"eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b\": container with ID starting with eabe9c9e352d8a22eb653518d139c5f7a370b4a20c10cf8132ed9f764aeb714b not found: ID does not exist" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.010346 4886 scope.go:117] "RemoveContainer" containerID="2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.010907 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc\": container with ID starting with 2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc not found: ID does not exist" containerID="2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.010941 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc"} err="failed to get container status \"2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc\": rpc error: code = NotFound desc = could not find container 
\"2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc\": container with ID starting with 2c4f9b75de0254d2bac38b77f41425b042f9c78bffd82437037ed38add7d45cc not found: ID does not exist" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.010965 4886 scope.go:117] "RemoveContainer" containerID="d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.073980 4886 scope.go:117] "RemoveContainer" containerID="3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.099932 4886 scope.go:117] "RemoveContainer" containerID="c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.120443 4886 scope.go:117] "RemoveContainer" containerID="d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.125836 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2\": container with ID starting with d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2 not found: ID does not exist" containerID="d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.125900 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2"} err="failed to get container status \"d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2\": rpc error: code = NotFound desc = could not find container \"d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2\": container with ID starting with d3b3deb7b25f0c5deb76617e12fa4f03827524fced20eeb6e9526ab61bca8ff2 not found: ID does not exist" Nov 24 08:53:37 crc 
kubenswrapper[4886]: I1124 08:53:37.125934 4886 scope.go:117] "RemoveContainer" containerID="3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.126463 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921\": container with ID starting with 3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921 not found: ID does not exist" containerID="3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.126484 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921"} err="failed to get container status \"3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921\": rpc error: code = NotFound desc = could not find container \"3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921\": container with ID starting with 3fae1e7325a7c0119ee0031edf1a21c21d03295739b3dd30a26dcfbb13e22921 not found: ID does not exist" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.126497 4886 scope.go:117] "RemoveContainer" containerID="c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.126786 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d\": container with ID starting with c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d not found: ID does not exist" containerID="c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.126810 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d"} err="failed to get container status \"c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d\": rpc error: code = NotFound desc = could not find container \"c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d\": container with ID starting with c77230ecd0034d41c03604dc53595cd12e8594c482d6f0daf54a2d9d3bf6dd4d not found: ID does not exist" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.126822 4886 scope.go:117] "RemoveContainer" containerID="aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.145969 4886 scope.go:117] "RemoveContainer" containerID="fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.156824 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2s97s"] Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.163807 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2s97s"] Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.169350 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pg5ls"] Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.172023 4886 scope.go:117] "RemoveContainer" containerID="53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.172086 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pg5ls"] Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.190635 4886 scope.go:117] "RemoveContainer" containerID="aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.191262 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca\": container with ID starting with aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca not found: ID does not exist" containerID="aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.191316 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca"} err="failed to get container status \"aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca\": rpc error: code = NotFound desc = could not find container \"aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca\": container with ID starting with aa329a1320b154fc105a618e811e6fe0c475d56c2386e8f046fe411bb19cc7ca not found: ID does not exist" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.191351 4886 scope.go:117] "RemoveContainer" containerID="fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.191718 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3\": container with ID starting with fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3 not found: ID does not exist" containerID="fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.191758 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3"} err="failed to get container status \"fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3\": rpc error: code = NotFound desc = could not find container 
\"fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3\": container with ID starting with fc6a81a47c2d6e1722b1a2b363a893b73ad5699fad2ea91e08b20533dd5fada3 not found: ID does not exist" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.191785 4886 scope.go:117] "RemoveContainer" containerID="53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee" Nov 24 08:53:37 crc kubenswrapper[4886]: E1124 08:53:37.192037 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee\": container with ID starting with 53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee not found: ID does not exist" containerID="53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.192070 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee"} err="failed to get container status \"53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee\": rpc error: code = NotFound desc = could not find container \"53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee\": container with ID starting with 53ea7abd2ef81847cfd683e89cdce37086a1d41a8d43a32c3a7c4b04cee438ee not found: ID does not exist" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.830091 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" event={"ID":"5c32598f-bb74-4615-b8f9-77f36f97f80a","Type":"ContainerStarted","Data":"50aa11afee627a6d12464ed23fde336d5229b5ed3c47f72a693cd8cae588b82b"} Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.830485 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" 
event={"ID":"5c32598f-bb74-4615-b8f9-77f36f97f80a","Type":"ContainerStarted","Data":"0261ad686644eac9948e67a27023a2f28dcc59275e947625b22b100b617a752b"} Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.830516 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.834554 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" Nov 24 08:53:37 crc kubenswrapper[4886]: I1124 08:53:37.857212 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-psbrg" podStartSLOduration=2.857185313 podStartE2EDuration="2.857185313s" podCreationTimestamp="2025-11-24 08:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:53:37.851828496 +0000 UTC m=+273.738566651" watchObservedRunningTime="2025-11-24 08:53:37.857185313 +0000 UTC m=+273.743923448" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103293 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hpdtc"] Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103606 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="extract-content" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103626 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="extract-content" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103644 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerName="extract-utilities" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103652 4886 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerName="extract-utilities" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103662 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103674 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103687 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89bb378-d235-4377-9908-0008691b9174" containerName="extract-utilities" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103697 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89bb378-d235-4377-9908-0008691b9174" containerName="extract-utilities" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103709 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerName="extract-content" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103717 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerName="extract-content" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103728 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="extract-utilities" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103736 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="extract-utilities" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103744 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" containerName="extract-utilities" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103751 4886 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" containerName="extract-utilities" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103759 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103765 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103779 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103786 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103798 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" containerName="extract-content" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103806 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" containerName="extract-content" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103817 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89bb378-d235-4377-9908-0008691b9174" containerName="extract-content" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103825 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89bb378-d235-4377-9908-0008691b9174" containerName="extract-content" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103837 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89bb378-d235-4377-9908-0008691b9174" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103845 4886 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d89bb378-d235-4377-9908-0008691b9174" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: E1124 08:53:38.103855 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" containerName="marketplace-operator" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103864 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" containerName="marketplace-operator" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103964 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" containerName="marketplace-operator" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103978 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89bb378-d235-4377-9908-0008691b9174" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103986 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.103993 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.104001 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" containerName="registry-server" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.105038 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.108717 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.115306 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpdtc"] Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.165063 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7685eb7-7670-424e-834e-cbe8c0a62dc9-catalog-content\") pod \"redhat-marketplace-hpdtc\" (UID: \"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.165156 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7685eb7-7670-424e-834e-cbe8c0a62dc9-utilities\") pod \"redhat-marketplace-hpdtc\" (UID: \"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.165232 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29t57\" (UniqueName: \"kubernetes.io/projected/b7685eb7-7670-424e-834e-cbe8c0a62dc9-kube-api-access-29t57\") pod \"redhat-marketplace-hpdtc\" (UID: \"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.266920 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29t57\" (UniqueName: \"kubernetes.io/projected/b7685eb7-7670-424e-834e-cbe8c0a62dc9-kube-api-access-29t57\") pod \"redhat-marketplace-hpdtc\" (UID: 
\"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.267414 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7685eb7-7670-424e-834e-cbe8c0a62dc9-catalog-content\") pod \"redhat-marketplace-hpdtc\" (UID: \"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.267465 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7685eb7-7670-424e-834e-cbe8c0a62dc9-utilities\") pod \"redhat-marketplace-hpdtc\" (UID: \"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.267995 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7685eb7-7670-424e-834e-cbe8c0a62dc9-catalog-content\") pod \"redhat-marketplace-hpdtc\" (UID: \"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.268061 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7685eb7-7670-424e-834e-cbe8c0a62dc9-utilities\") pod \"redhat-marketplace-hpdtc\" (UID: \"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.288857 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29t57\" (UniqueName: \"kubernetes.io/projected/b7685eb7-7670-424e-834e-cbe8c0a62dc9-kube-api-access-29t57\") pod \"redhat-marketplace-hpdtc\" (UID: \"b7685eb7-7670-424e-834e-cbe8c0a62dc9\") " 
pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.316862 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-27q9d"] Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.318559 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.321263 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.322541 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27q9d"] Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.369487 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-catalog-content\") pod \"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.369563 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nkk\" (UniqueName: \"kubernetes.io/projected/f32efa8c-706c-4a05-a3a0-6d3be84722c3-kube-api-access-m5nkk\") pod \"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.369602 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-utilities\") pod \"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" 
Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.430519 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.478982 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-catalog-content\") pod \"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.479054 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nkk\" (UniqueName: \"kubernetes.io/projected/f32efa8c-706c-4a05-a3a0-6d3be84722c3-kube-api-access-m5nkk\") pod \"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.479091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-utilities\") pod \"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.479671 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-utilities\") pod \"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.479727 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-catalog-content\") pod 
\"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.499774 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nkk\" (UniqueName: \"kubernetes.io/projected/f32efa8c-706c-4a05-a3a0-6d3be84722c3-kube-api-access-m5nkk\") pod \"redhat-operators-27q9d\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.652067 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.859441 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364b3e42-dafa-45cd-bf38-545cc2eb9e21" path="/var/lib/kubelet/pods/364b3e42-dafa-45cd-bf38-545cc2eb9e21/volumes" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.860079 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44504d41-1a7d-4a15-a270-24325b0954a9" path="/var/lib/kubelet/pods/44504d41-1a7d-4a15-a270-24325b0954a9/volumes" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.860736 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfd24b4-c215-49b2-af8e-a3875c05c738" path="/var/lib/kubelet/pods/5cfd24b4-c215-49b2-af8e-a3875c05c738/volumes" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.861815 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f30dce-707e-45e7-a928-4602478ac07d" path="/var/lib/kubelet/pods/a5f30dce-707e-45e7-a928-4602478ac07d/volumes" Nov 24 08:53:38 crc kubenswrapper[4886]: I1124 08:53:38.862681 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpdtc"] Nov 24 08:53:38 crc kubenswrapper[4886]: W1124 08:53:38.865108 4886 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7685eb7_7670_424e_834e_cbe8c0a62dc9.slice/crio-4139c5ac1d1a2c8572d73a41d19009c1f076a301aa2a1c488afbe68f628db033 WatchSource:0}: Error finding container 4139c5ac1d1a2c8572d73a41d19009c1f076a301aa2a1c488afbe68f628db033: Status 404 returned error can't find the container with id 4139c5ac1d1a2c8572d73a41d19009c1f076a301aa2a1c488afbe68f628db033 Nov 24 08:53:39 crc kubenswrapper[4886]: I1124 08:53:39.044524 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27q9d"] Nov 24 08:53:39 crc kubenswrapper[4886]: W1124 08:53:39.084597 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf32efa8c_706c_4a05_a3a0_6d3be84722c3.slice/crio-2c7cdac66915e43dc803614a8ef277f448057cd3745ff04baa05d52eb63b0d95 WatchSource:0}: Error finding container 2c7cdac66915e43dc803614a8ef277f448057cd3745ff04baa05d52eb63b0d95: Status 404 returned error can't find the container with id 2c7cdac66915e43dc803614a8ef277f448057cd3745ff04baa05d52eb63b0d95 Nov 24 08:53:39 crc kubenswrapper[4886]: I1124 08:53:39.852142 4886 generic.go:334] "Generic (PLEG): container finished" podID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerID="b81cff2f00edafe81809ac5f442737d84dc29ae6ad0f8eb3829cf4c6ceaa8dc6" exitCode=0 Nov 24 08:53:39 crc kubenswrapper[4886]: I1124 08:53:39.852221 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27q9d" event={"ID":"f32efa8c-706c-4a05-a3a0-6d3be84722c3","Type":"ContainerDied","Data":"b81cff2f00edafe81809ac5f442737d84dc29ae6ad0f8eb3829cf4c6ceaa8dc6"} Nov 24 08:53:39 crc kubenswrapper[4886]: I1124 08:53:39.852317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27q9d" 
event={"ID":"f32efa8c-706c-4a05-a3a0-6d3be84722c3","Type":"ContainerStarted","Data":"2c7cdac66915e43dc803614a8ef277f448057cd3745ff04baa05d52eb63b0d95"} Nov 24 08:53:39 crc kubenswrapper[4886]: I1124 08:53:39.853730 4886 generic.go:334] "Generic (PLEG): container finished" podID="b7685eb7-7670-424e-834e-cbe8c0a62dc9" containerID="281154462a1e910f0a0d8fb90013b18d75bd991c3d2072e6234d214c4411d216" exitCode=0 Nov 24 08:53:39 crc kubenswrapper[4886]: I1124 08:53:39.854860 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpdtc" event={"ID":"b7685eb7-7670-424e-834e-cbe8c0a62dc9","Type":"ContainerDied","Data":"281154462a1e910f0a0d8fb90013b18d75bd991c3d2072e6234d214c4411d216"} Nov 24 08:53:39 crc kubenswrapper[4886]: I1124 08:53:39.854892 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpdtc" event={"ID":"b7685eb7-7670-424e-834e-cbe8c0a62dc9","Type":"ContainerStarted","Data":"4139c5ac1d1a2c8572d73a41d19009c1f076a301aa2a1c488afbe68f628db033"} Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.501725 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2wf7k"] Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.503392 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.505263 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.516856 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wf7k"] Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.609777 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwdz\" (UniqueName: \"kubernetes.io/projected/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-kube-api-access-ptwdz\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.609855 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-utilities\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.609888 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-catalog-content\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.702158 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5zslr"] Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.703306 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.705856 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.712084 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwdz\" (UniqueName: \"kubernetes.io/projected/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-kube-api-access-ptwdz\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.712198 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-utilities\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.712232 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-catalog-content\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.712779 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-catalog-content\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.713236 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-utilities\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.715800 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zslr"] Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.745488 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwdz\" (UniqueName: \"kubernetes.io/projected/1fb9d8ba-cdd5-4186-8905-8e06876efe9c-kube-api-access-ptwdz\") pod \"community-operators-2wf7k\" (UID: \"1fb9d8ba-cdd5-4186-8905-8e06876efe9c\") " pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.813578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-utilities\") pod \"certified-operators-5zslr\" (UID: \"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.813640 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h7gf\" (UniqueName: \"kubernetes.io/projected/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-kube-api-access-7h7gf\") pod \"certified-operators-5zslr\" (UID: \"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.813708 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-catalog-content\") pod \"certified-operators-5zslr\" (UID: 
\"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.830915 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.860673 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27q9d" event={"ID":"f32efa8c-706c-4a05-a3a0-6d3be84722c3","Type":"ContainerStarted","Data":"13108e6edd6e8a04c5fa2189f863da877fabe23761f13a3a80708e98afb20ce5"} Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.862346 4886 generic.go:334] "Generic (PLEG): container finished" podID="b7685eb7-7670-424e-834e-cbe8c0a62dc9" containerID="83e77035bd78f58abb6d9d93f24c7b6b2d77574e27690008ddbfa5dfc78595ea" exitCode=0 Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.862382 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpdtc" event={"ID":"b7685eb7-7670-424e-834e-cbe8c0a62dc9","Type":"ContainerDied","Data":"83e77035bd78f58abb6d9d93f24c7b6b2d77574e27690008ddbfa5dfc78595ea"} Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.915177 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-utilities\") pod \"certified-operators-5zslr\" (UID: \"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.915242 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h7gf\" (UniqueName: \"kubernetes.io/projected/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-kube-api-access-7h7gf\") pod \"certified-operators-5zslr\" (UID: \"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 
crc kubenswrapper[4886]: I1124 08:53:40.915322 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-catalog-content\") pod \"certified-operators-5zslr\" (UID: \"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.915856 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-utilities\") pod \"certified-operators-5zslr\" (UID: \"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.915901 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-catalog-content\") pod \"certified-operators-5zslr\" (UID: \"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:40 crc kubenswrapper[4886]: I1124 08:53:40.940667 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h7gf\" (UniqueName: \"kubernetes.io/projected/acc3d90c-d18b-48b6-94b2-8ef5250fd6c3-kube-api-access-7h7gf\") pod \"certified-operators-5zslr\" (UID: \"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3\") " pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.020213 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.257744 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wf7k"] Nov 24 08:53:41 crc kubenswrapper[4886]: W1124 08:53:41.268655 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb9d8ba_cdd5_4186_8905_8e06876efe9c.slice/crio-d8e45c498cd4fa45425dab451cc9c977fda6bf7330c398d892685176af4e9e92 WatchSource:0}: Error finding container d8e45c498cd4fa45425dab451cc9c977fda6bf7330c398d892685176af4e9e92: Status 404 returned error can't find the container with id d8e45c498cd4fa45425dab451cc9c977fda6bf7330c398d892685176af4e9e92 Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.430591 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zslr"] Nov 24 08:53:41 crc kubenswrapper[4886]: W1124 08:53:41.456845 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacc3d90c_d18b_48b6_94b2_8ef5250fd6c3.slice/crio-c4a6adf51ba6693b7716bb600e564358686e2d73723e771eda71dc63a53f4d8f WatchSource:0}: Error finding container c4a6adf51ba6693b7716bb600e564358686e2d73723e771eda71dc63a53f4d8f: Status 404 returned error can't find the container with id c4a6adf51ba6693b7716bb600e564358686e2d73723e771eda71dc63a53f4d8f Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.870548 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpdtc" event={"ID":"b7685eb7-7670-424e-834e-cbe8c0a62dc9","Type":"ContainerStarted","Data":"7829f531dd10deb87c78d59f93a71c1f3b367ff4645f65974608f5823561f836"} Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.872674 4886 generic.go:334] "Generic (PLEG): container finished" podID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" 
containerID="13108e6edd6e8a04c5fa2189f863da877fabe23761f13a3a80708e98afb20ce5" exitCode=0 Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.872734 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27q9d" event={"ID":"f32efa8c-706c-4a05-a3a0-6d3be84722c3","Type":"ContainerDied","Data":"13108e6edd6e8a04c5fa2189f863da877fabe23761f13a3a80708e98afb20ce5"} Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.876844 4886 generic.go:334] "Generic (PLEG): container finished" podID="1fb9d8ba-cdd5-4186-8905-8e06876efe9c" containerID="be765b09a79d692d98cdc31f653b5d704300a7aaa00c40d130d2473776a31ec7" exitCode=0 Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.876908 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wf7k" event={"ID":"1fb9d8ba-cdd5-4186-8905-8e06876efe9c","Type":"ContainerDied","Data":"be765b09a79d692d98cdc31f653b5d704300a7aaa00c40d130d2473776a31ec7"} Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.876929 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wf7k" event={"ID":"1fb9d8ba-cdd5-4186-8905-8e06876efe9c","Type":"ContainerStarted","Data":"d8e45c498cd4fa45425dab451cc9c977fda6bf7330c398d892685176af4e9e92"} Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.882492 4886 generic.go:334] "Generic (PLEG): container finished" podID="acc3d90c-d18b-48b6-94b2-8ef5250fd6c3" containerID="3b7db29849f13925c56d8a6ec99c2244b1bde65ea48d47f81a36cea7266ede94" exitCode=0 Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.882536 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zslr" event={"ID":"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3","Type":"ContainerDied","Data":"3b7db29849f13925c56d8a6ec99c2244b1bde65ea48d47f81a36cea7266ede94"} Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.882567 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-5zslr" event={"ID":"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3","Type":"ContainerStarted","Data":"c4a6adf51ba6693b7716bb600e564358686e2d73723e771eda71dc63a53f4d8f"} Nov 24 08:53:41 crc kubenswrapper[4886]: I1124 08:53:41.897193 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hpdtc" podStartSLOduration=2.495847891 podStartE2EDuration="3.897154238s" podCreationTimestamp="2025-11-24 08:53:38 +0000 UTC" firstStartedPulling="2025-11-24 08:53:39.856474598 +0000 UTC m=+275.743212733" lastFinishedPulling="2025-11-24 08:53:41.257780945 +0000 UTC m=+277.144519080" observedRunningTime="2025-11-24 08:53:41.89449003 +0000 UTC m=+277.781228165" watchObservedRunningTime="2025-11-24 08:53:41.897154238 +0000 UTC m=+277.783892383" Nov 24 08:53:42 crc kubenswrapper[4886]: I1124 08:53:42.892260 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27q9d" event={"ID":"f32efa8c-706c-4a05-a3a0-6d3be84722c3","Type":"ContainerStarted","Data":"35ce95ccaf0cdc818aca0e15931b60e260aae73fe6e671bee84829250a7b6a43"} Nov 24 08:53:42 crc kubenswrapper[4886]: I1124 08:53:42.895887 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wf7k" event={"ID":"1fb9d8ba-cdd5-4186-8905-8e06876efe9c","Type":"ContainerStarted","Data":"c1dd0d6378e25235e6dbc6914b3e78f1f68c6fa57a2a99ddb2f8116ff78ca27a"} Nov 24 08:53:42 crc kubenswrapper[4886]: I1124 08:53:42.941872 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-27q9d" podStartSLOduration=2.446293726 podStartE2EDuration="4.941848533s" podCreationTimestamp="2025-11-24 08:53:38 +0000 UTC" firstStartedPulling="2025-11-24 08:53:39.854251073 +0000 UTC m=+275.740989208" lastFinishedPulling="2025-11-24 08:53:42.34980587 +0000 UTC m=+278.236544015" observedRunningTime="2025-11-24 08:53:42.920919833 
+0000 UTC m=+278.807657968" watchObservedRunningTime="2025-11-24 08:53:42.941848533 +0000 UTC m=+278.828586668" Nov 24 08:53:43 crc kubenswrapper[4886]: I1124 08:53:43.906145 4886 generic.go:334] "Generic (PLEG): container finished" podID="1fb9d8ba-cdd5-4186-8905-8e06876efe9c" containerID="c1dd0d6378e25235e6dbc6914b3e78f1f68c6fa57a2a99ddb2f8116ff78ca27a" exitCode=0 Nov 24 08:53:43 crc kubenswrapper[4886]: I1124 08:53:43.906275 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wf7k" event={"ID":"1fb9d8ba-cdd5-4186-8905-8e06876efe9c","Type":"ContainerDied","Data":"c1dd0d6378e25235e6dbc6914b3e78f1f68c6fa57a2a99ddb2f8116ff78ca27a"} Nov 24 08:53:43 crc kubenswrapper[4886]: I1124 08:53:43.913778 4886 generic.go:334] "Generic (PLEG): container finished" podID="acc3d90c-d18b-48b6-94b2-8ef5250fd6c3" containerID="6da688cc461543523f654f04c681f763390c2e79a92dbc326a223656a7a3aceb" exitCode=0 Nov 24 08:53:43 crc kubenswrapper[4886]: I1124 08:53:43.914247 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zslr" event={"ID":"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3","Type":"ContainerDied","Data":"6da688cc461543523f654f04c681f763390c2e79a92dbc326a223656a7a3aceb"} Nov 24 08:53:45 crc kubenswrapper[4886]: I1124 08:53:45.930139 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wf7k" event={"ID":"1fb9d8ba-cdd5-4186-8905-8e06876efe9c","Type":"ContainerStarted","Data":"9c629811f714cdb7fffa210e2f21990623600c9b92afcacdd974dcbadf2e9479"} Nov 24 08:53:45 crc kubenswrapper[4886]: I1124 08:53:45.933286 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zslr" event={"ID":"acc3d90c-d18b-48b6-94b2-8ef5250fd6c3","Type":"ContainerStarted","Data":"56e914d0b968db857d71ae532c6a6e752c8ca55f08745245d68a80827ef91377"} Nov 24 08:53:45 crc kubenswrapper[4886]: I1124 08:53:45.955474 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2wf7k" podStartSLOduration=3.49833624 podStartE2EDuration="5.955444727s" podCreationTimestamp="2025-11-24 08:53:40 +0000 UTC" firstStartedPulling="2025-11-24 08:53:41.878528525 +0000 UTC m=+277.765266660" lastFinishedPulling="2025-11-24 08:53:44.335637002 +0000 UTC m=+280.222375147" observedRunningTime="2025-11-24 08:53:45.950915165 +0000 UTC m=+281.837653320" watchObservedRunningTime="2025-11-24 08:53:45.955444727 +0000 UTC m=+281.842182862" Nov 24 08:53:45 crc kubenswrapper[4886]: I1124 08:53:45.980898 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5zslr" podStartSLOduration=3.474176565 podStartE2EDuration="5.980877198s" podCreationTimestamp="2025-11-24 08:53:40 +0000 UTC" firstStartedPulling="2025-11-24 08:53:41.88385282 +0000 UTC m=+277.770590955" lastFinishedPulling="2025-11-24 08:53:44.390553453 +0000 UTC m=+280.277291588" observedRunningTime="2025-11-24 08:53:45.980331662 +0000 UTC m=+281.867069797" watchObservedRunningTime="2025-11-24 08:53:45.980877198 +0000 UTC m=+281.867615333" Nov 24 08:53:48 crc kubenswrapper[4886]: I1124 08:53:48.432399 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:48 crc kubenswrapper[4886]: I1124 08:53:48.432909 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:48 crc kubenswrapper[4886]: I1124 08:53:48.481477 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:48 crc kubenswrapper[4886]: I1124 08:53:48.654189 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:48 crc kubenswrapper[4886]: I1124 
08:53:48.654287 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:48 crc kubenswrapper[4886]: I1124 08:53:48.700416 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:48 crc kubenswrapper[4886]: I1124 08:53:48.991683 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 08:53:48 crc kubenswrapper[4886]: I1124 08:53:48.996997 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hpdtc" Nov 24 08:53:50 crc kubenswrapper[4886]: I1124 08:53:50.832071 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:50 crc kubenswrapper[4886]: I1124 08:53:50.832525 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:50 crc kubenswrapper[4886]: I1124 08:53:50.877495 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:51 crc kubenswrapper[4886]: I1124 08:53:51.009846 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2wf7k" Nov 24 08:53:51 crc kubenswrapper[4886]: I1124 08:53:51.021850 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:51 crc kubenswrapper[4886]: I1124 08:53:51.021928 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:51 crc kubenswrapper[4886]: I1124 08:53:51.072513 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:53:52 crc kubenswrapper[4886]: I1124 08:53:52.036050 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5zslr" Nov 24 08:55:01 crc kubenswrapper[4886]: I1124 08:55:01.784449 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:55:01 crc kubenswrapper[4886]: I1124 08:55:01.787765 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:55:31 crc kubenswrapper[4886]: I1124 08:55:31.784529 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:55:31 crc kubenswrapper[4886]: I1124 08:55:31.785479 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:56:01 crc kubenswrapper[4886]: I1124 08:56:01.785628 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:56:01 crc kubenswrapper[4886]: I1124 08:56:01.786334 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:56:01 crc kubenswrapper[4886]: I1124 08:56:01.786409 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:56:01 crc kubenswrapper[4886]: I1124 08:56:01.787329 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71eb5673abcc11e0163c9266fe128b74e3ad31a62badd22878a0c5c714b5f6d8"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 08:56:01 crc kubenswrapper[4886]: I1124 08:56:01.787415 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://71eb5673abcc11e0163c9266fe128b74e3ad31a62badd22878a0c5c714b5f6d8" gracePeriod=600 Nov 24 08:56:02 crc kubenswrapper[4886]: I1124 08:56:02.147710 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="71eb5673abcc11e0163c9266fe128b74e3ad31a62badd22878a0c5c714b5f6d8" exitCode=0 Nov 24 08:56:02 crc kubenswrapper[4886]: I1124 08:56:02.147782 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" 
event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"71eb5673abcc11e0163c9266fe128b74e3ad31a62badd22878a0c5c714b5f6d8"} Nov 24 08:56:02 crc kubenswrapper[4886]: I1124 08:56:02.148222 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"10588af6709fb47b831a7119f79d39a2660cc9b0982198d8ef6ad1d8444269b4"} Nov 24 08:56:02 crc kubenswrapper[4886]: I1124 08:56:02.148259 4886 scope.go:117] "RemoveContainer" containerID="5b6a02c10c81171f23bf0e623a3f86710da37f6dd62f885c0366830a60b2be34" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.549548 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ns4v7"] Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.550951 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.569965 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ns4v7"] Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.610543 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.610607 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67cacba3-24ae-4870-9923-95c4682e9963-installation-pull-secrets\") pod 
\"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.610635 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cczgf\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-kube-api-access-cczgf\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.610657 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-bound-sa-token\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.610689 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67cacba3-24ae-4870-9923-95c4682e9963-registry-certificates\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.610801 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67cacba3-24ae-4870-9923-95c4682e9963-trusted-ca\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.610866 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-registry-tls\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.610901 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67cacba3-24ae-4870-9923-95c4682e9963-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.633344 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.712014 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67cacba3-24ae-4870-9923-95c4682e9963-trusted-ca\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.712070 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-registry-tls\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.712106 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67cacba3-24ae-4870-9923-95c4682e9963-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.712169 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67cacba3-24ae-4870-9923-95c4682e9963-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.712207 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cczgf\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-kube-api-access-cczgf\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.712241 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-bound-sa-token\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.712722 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/67cacba3-24ae-4870-9923-95c4682e9963-registry-certificates\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.712722 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/67cacba3-24ae-4870-9923-95c4682e9963-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.714269 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/67cacba3-24ae-4870-9923-95c4682e9963-registry-certificates\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.715361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67cacba3-24ae-4870-9923-95c4682e9963-trusted-ca\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.719454 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/67cacba3-24ae-4870-9923-95c4682e9963-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.720380 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-registry-tls\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.738297 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cczgf\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-kube-api-access-cczgf\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.741012 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67cacba3-24ae-4870-9923-95c4682e9963-bound-sa-token\") pod \"image-registry-66df7c8f76-ns4v7\" (UID: \"67cacba3-24ae-4870-9923-95c4682e9963\") " pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:48 crc kubenswrapper[4886]: I1124 08:56:48.869366 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:49 crc kubenswrapper[4886]: I1124 08:56:49.085949 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ns4v7"] Nov 24 08:56:49 crc kubenswrapper[4886]: I1124 08:56:49.420017 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" event={"ID":"67cacba3-24ae-4870-9923-95c4682e9963","Type":"ContainerStarted","Data":"537e62e74f47438162c31b74535c0cc398394812a14c522417045e829012529a"} Nov 24 08:56:49 crc kubenswrapper[4886]: I1124 08:56:49.420072 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" event={"ID":"67cacba3-24ae-4870-9923-95c4682e9963","Type":"ContainerStarted","Data":"1e83f0920b82a0dbce92bc03bae6b3d9e93f827465411f7d0c608c3767908a33"} Nov 24 08:56:49 crc kubenswrapper[4886]: I1124 08:56:49.420214 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:56:49 crc kubenswrapper[4886]: I1124 08:56:49.441623 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" podStartSLOduration=1.441602569 podStartE2EDuration="1.441602569s" podCreationTimestamp="2025-11-24 08:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:56:49.440129265 +0000 UTC m=+465.326867410" watchObservedRunningTime="2025-11-24 08:56:49.441602569 +0000 UTC m=+465.328340704" Nov 24 08:57:08 crc kubenswrapper[4886]: I1124 08:57:08.876250 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ns4v7" Nov 24 08:57:08 crc kubenswrapper[4886]: I1124 08:57:08.931033 4886 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n794d"] Nov 24 08:57:33 crc kubenswrapper[4886]: I1124 08:57:33.973016 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" podUID="30599c42-eef7-4967-b84f-95b49a225bd6" containerName="registry" containerID="cri-o://010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162" gracePeriod=30 Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.403879 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.498928 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"30599c42-eef7-4967-b84f-95b49a225bd6\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.499006 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-registry-certificates\") pod \"30599c42-eef7-4967-b84f-95b49a225bd6\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.499064 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30599c42-eef7-4967-b84f-95b49a225bd6-installation-pull-secrets\") pod \"30599c42-eef7-4967-b84f-95b49a225bd6\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.499089 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4hmv\" (UniqueName: 
\"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-kube-api-access-d4hmv\") pod \"30599c42-eef7-4967-b84f-95b49a225bd6\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.499113 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30599c42-eef7-4967-b84f-95b49a225bd6-ca-trust-extracted\") pod \"30599c42-eef7-4967-b84f-95b49a225bd6\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.499136 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-trusted-ca\") pod \"30599c42-eef7-4967-b84f-95b49a225bd6\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.499182 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-registry-tls\") pod \"30599c42-eef7-4967-b84f-95b49a225bd6\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.499222 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-bound-sa-token\") pod \"30599c42-eef7-4967-b84f-95b49a225bd6\" (UID: \"30599c42-eef7-4967-b84f-95b49a225bd6\") " Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.500382 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "30599c42-eef7-4967-b84f-95b49a225bd6" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.500411 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "30599c42-eef7-4967-b84f-95b49a225bd6" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.511766 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "30599c42-eef7-4967-b84f-95b49a225bd6" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.512173 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-kube-api-access-d4hmv" (OuterVolumeSpecName: "kube-api-access-d4hmv") pod "30599c42-eef7-4967-b84f-95b49a225bd6" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6"). InnerVolumeSpecName "kube-api-access-d4hmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.514754 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30599c42-eef7-4967-b84f-95b49a225bd6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "30599c42-eef7-4967-b84f-95b49a225bd6" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.515620 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "30599c42-eef7-4967-b84f-95b49a225bd6" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.518096 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "30599c42-eef7-4967-b84f-95b49a225bd6" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.519425 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30599c42-eef7-4967-b84f-95b49a225bd6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "30599c42-eef7-4967-b84f-95b49a225bd6" (UID: "30599c42-eef7-4967-b84f-95b49a225bd6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.600775 4886 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/30599c42-eef7-4967-b84f-95b49a225bd6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.600868 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4hmv\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-kube-api-access-d4hmv\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.600881 4886 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/30599c42-eef7-4967-b84f-95b49a225bd6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.600891 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.600901 4886 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.600909 4886 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30599c42-eef7-4967-b84f-95b49a225bd6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.600918 4886 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/30599c42-eef7-4967-b84f-95b49a225bd6-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:34 crc 
kubenswrapper[4886]: I1124 08:57:34.684534 4886 generic.go:334] "Generic (PLEG): container finished" podID="30599c42-eef7-4967-b84f-95b49a225bd6" containerID="010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162" exitCode=0 Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.684581 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" event={"ID":"30599c42-eef7-4967-b84f-95b49a225bd6","Type":"ContainerDied","Data":"010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162"} Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.684610 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" event={"ID":"30599c42-eef7-4967-b84f-95b49a225bd6","Type":"ContainerDied","Data":"35d6c55573a07de6320c4d8e5979aae5d1ccd14f60ec0ab082181b41f8322274"} Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.684627 4886 scope.go:117] "RemoveContainer" containerID="010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.684635 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n794d" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.701598 4886 scope.go:117] "RemoveContainer" containerID="010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162" Nov 24 08:57:34 crc kubenswrapper[4886]: E1124 08:57:34.702748 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162\": container with ID starting with 010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162 not found: ID does not exist" containerID="010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.702814 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162"} err="failed to get container status \"010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162\": rpc error: code = NotFound desc = could not find container \"010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162\": container with ID starting with 010db2b5dc87e29f75af5234e09b2797b542092c922d9420b2be76a296dcd162 not found: ID does not exist" Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.726779 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n794d"] Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.729515 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n794d"] Nov 24 08:57:34 crc kubenswrapper[4886]: I1124 08:57:34.858383 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30599c42-eef7-4967-b84f-95b49a225bd6" path="/var/lib/kubelet/pods/30599c42-eef7-4967-b84f-95b49a225bd6/volumes" Nov 24 08:58:05 crc kubenswrapper[4886]: I1124 
08:58:05.070408 4886 scope.go:117] "RemoveContainer" containerID="0dedc518dbe713a3614a67d79e2f30f969a0d8104d2030a7aa5866b969fdef1a" Nov 24 08:58:31 crc kubenswrapper[4886]: I1124 08:58:31.784899 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:58:31 crc kubenswrapper[4886]: I1124 08:58:31.785846 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:59:01 crc kubenswrapper[4886]: I1124 08:59:01.785280 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:59:01 crc kubenswrapper[4886]: I1124 08:59:01.786036 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:59:05 crc kubenswrapper[4886]: I1124 08:59:05.113811 4886 scope.go:117] "RemoveContainer" containerID="f3630721528ba5c1692d53305eed9f6f2b54706bca1ec91c9cdd444af71f51f3" Nov 24 08:59:05 crc kubenswrapper[4886]: I1124 08:59:05.142098 4886 scope.go:117] "RemoveContainer" containerID="56f0d139681d9520145179f9e2a3644963add65d174a849256275e4515a8ae09" Nov 24 08:59:31 
crc kubenswrapper[4886]: I1124 08:59:31.784702 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:59:31 crc kubenswrapper[4886]: I1124 08:59:31.785745 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:59:31 crc kubenswrapper[4886]: I1124 08:59:31.785828 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 08:59:31 crc kubenswrapper[4886]: I1124 08:59:31.786771 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10588af6709fb47b831a7119f79d39a2660cc9b0982198d8ef6ad1d8444269b4"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 08:59:31 crc kubenswrapper[4886]: I1124 08:59:31.786853 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://10588af6709fb47b831a7119f79d39a2660cc9b0982198d8ef6ad1d8444269b4" gracePeriod=600 Nov 24 08:59:32 crc kubenswrapper[4886]: I1124 08:59:32.340571 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" 
containerID="10588af6709fb47b831a7119f79d39a2660cc9b0982198d8ef6ad1d8444269b4" exitCode=0 Nov 24 08:59:32 crc kubenswrapper[4886]: I1124 08:59:32.340671 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"10588af6709fb47b831a7119f79d39a2660cc9b0982198d8ef6ad1d8444269b4"} Nov 24 08:59:32 crc kubenswrapper[4886]: I1124 08:59:32.341133 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"2e3a75d48f5b6c64a0453de51e83f56ff421f563e7ead2b6374e43297260b2ce"} Nov 24 08:59:32 crc kubenswrapper[4886]: I1124 08:59:32.341187 4886 scope.go:117] "RemoveContainer" containerID="71eb5673abcc11e0163c9266fe128b74e3ad31a62badd22878a0c5c714b5f6d8" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.408832 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gsdqn"] Nov 24 08:59:56 crc kubenswrapper[4886]: E1124 08:59:56.410311 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30599c42-eef7-4967-b84f-95b49a225bd6" containerName="registry" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.410334 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="30599c42-eef7-4967-b84f-95b49a225bd6" containerName="registry" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.410503 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="30599c42-eef7-4967-b84f-95b49a225bd6" containerName="registry" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.411249 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-gsdqn" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.411592 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ff82d"] Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.412658 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ff82d" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.414837 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.414941 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9v7fj" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.414837 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.417507 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xfhvv" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.424412 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9cfx6"] Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.425385 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.428566 4886 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-599vl" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.438591 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9cfx6"] Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.448992 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ff82d"] Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.476192 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gsdqn"] Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.544452 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhpj\" (UniqueName: \"kubernetes.io/projected/9475a865-8fb9-4c93-aeb0-09e9b8285a88-kube-api-access-rnhpj\") pod \"cert-manager-cainjector-7f985d654d-gsdqn\" (UID: \"9475a865-8fb9-4c93-aeb0-09e9b8285a88\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gsdqn" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.544591 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq772\" (UniqueName: \"kubernetes.io/projected/fc0d7b30-aa61-4f00-a908-d13689ed0b04-kube-api-access-fq772\") pod \"cert-manager-5b446d88c5-ff82d\" (UID: \"fc0d7b30-aa61-4f00-a908-d13689ed0b04\") " pod="cert-manager/cert-manager-5b446d88c5-ff82d" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.544691 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvk25\" (UniqueName: \"kubernetes.io/projected/7b1b394b-0362-4ee6-a956-48d7598ef6a2-kube-api-access-jvk25\") pod 
\"cert-manager-webhook-5655c58dd6-9cfx6\" (UID: \"7b1b394b-0362-4ee6-a956-48d7598ef6a2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.646571 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq772\" (UniqueName: \"kubernetes.io/projected/fc0d7b30-aa61-4f00-a908-d13689ed0b04-kube-api-access-fq772\") pod \"cert-manager-5b446d88c5-ff82d\" (UID: \"fc0d7b30-aa61-4f00-a908-d13689ed0b04\") " pod="cert-manager/cert-manager-5b446d88c5-ff82d" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.646619 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvk25\" (UniqueName: \"kubernetes.io/projected/7b1b394b-0362-4ee6-a956-48d7598ef6a2-kube-api-access-jvk25\") pod \"cert-manager-webhook-5655c58dd6-9cfx6\" (UID: \"7b1b394b-0362-4ee6-a956-48d7598ef6a2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.646667 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhpj\" (UniqueName: \"kubernetes.io/projected/9475a865-8fb9-4c93-aeb0-09e9b8285a88-kube-api-access-rnhpj\") pod \"cert-manager-cainjector-7f985d654d-gsdqn\" (UID: \"9475a865-8fb9-4c93-aeb0-09e9b8285a88\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gsdqn" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.666645 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvk25\" (UniqueName: \"kubernetes.io/projected/7b1b394b-0362-4ee6-a956-48d7598ef6a2-kube-api-access-jvk25\") pod \"cert-manager-webhook-5655c58dd6-9cfx6\" (UID: \"7b1b394b-0362-4ee6-a956-48d7598ef6a2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.669221 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq772\" 
(UniqueName: \"kubernetes.io/projected/fc0d7b30-aa61-4f00-a908-d13689ed0b04-kube-api-access-fq772\") pod \"cert-manager-5b446d88c5-ff82d\" (UID: \"fc0d7b30-aa61-4f00-a908-d13689ed0b04\") " pod="cert-manager/cert-manager-5b446d88c5-ff82d" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.676288 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhpj\" (UniqueName: \"kubernetes.io/projected/9475a865-8fb9-4c93-aeb0-09e9b8285a88-kube-api-access-rnhpj\") pod \"cert-manager-cainjector-7f985d654d-gsdqn\" (UID: \"9475a865-8fb9-4c93-aeb0-09e9b8285a88\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gsdqn" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.735441 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-gsdqn" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.748598 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ff82d" Nov 24 08:59:56 crc kubenswrapper[4886]: I1124 08:59:56.760238 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" Nov 24 08:59:57 crc kubenswrapper[4886]: I1124 08:59:57.021846 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gsdqn"] Nov 24 08:59:57 crc kubenswrapper[4886]: I1124 08:59:57.029084 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 08:59:57 crc kubenswrapper[4886]: I1124 08:59:57.054431 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-9cfx6"] Nov 24 08:59:57 crc kubenswrapper[4886]: I1124 08:59:57.183691 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ff82d"] Nov 24 08:59:57 crc kubenswrapper[4886]: W1124 08:59:57.190209 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc0d7b30_aa61_4f00_a908_d13689ed0b04.slice/crio-16336db5303eba9a1586da3f85f5e2536a58ae11667ec599428f5f8015f56c51 WatchSource:0}: Error finding container 16336db5303eba9a1586da3f85f5e2536a58ae11667ec599428f5f8015f56c51: Status 404 returned error can't find the container with id 16336db5303eba9a1586da3f85f5e2536a58ae11667ec599428f5f8015f56c51 Nov 24 08:59:57 crc kubenswrapper[4886]: I1124 08:59:57.546995 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ff82d" event={"ID":"fc0d7b30-aa61-4f00-a908-d13689ed0b04","Type":"ContainerStarted","Data":"16336db5303eba9a1586da3f85f5e2536a58ae11667ec599428f5f8015f56c51"} Nov 24 08:59:57 crc kubenswrapper[4886]: I1124 08:59:57.548260 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" event={"ID":"7b1b394b-0362-4ee6-a956-48d7598ef6a2","Type":"ContainerStarted","Data":"01a258821b256e123a94ee894a8fbeaeb8e1feb4eb6c48e51472d3f77772181d"} Nov 24 08:59:57 crc kubenswrapper[4886]: 
I1124 08:59:57.549431 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-gsdqn" event={"ID":"9475a865-8fb9-4c93-aeb0-09e9b8285a88","Type":"ContainerStarted","Data":"19c759858f4362c45b66263ec6b31098c97e92d76d04e75a1c31a8f9f9a8ed59"} Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.139387 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n"] Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.141320 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.145836 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n"] Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.158808 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.158899 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.308350 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7wwq\" (UniqueName: \"kubernetes.io/projected/f350f6e8-25d9-410b-be41-c4d511d67599-kube-api-access-h7wwq\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.308490 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f350f6e8-25d9-410b-be41-c4d511d67599-config-volume\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.308578 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f350f6e8-25d9-410b-be41-c4d511d67599-secret-volume\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.410207 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f350f6e8-25d9-410b-be41-c4d511d67599-secret-volume\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.410451 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7wwq\" (UniqueName: \"kubernetes.io/projected/f350f6e8-25d9-410b-be41-c4d511d67599-kube-api-access-h7wwq\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.410496 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f350f6e8-25d9-410b-be41-c4d511d67599-config-volume\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: 
I1124 09:00:00.412828 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f350f6e8-25d9-410b-be41-c4d511d67599-config-volume\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.428256 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f350f6e8-25d9-410b-be41-c4d511d67599-secret-volume\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.436294 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7wwq\" (UniqueName: \"kubernetes.io/projected/f350f6e8-25d9-410b-be41-c4d511d67599-kube-api-access-h7wwq\") pod \"collect-profiles-29399580-vkx6n\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.484025 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:00 crc kubenswrapper[4886]: I1124 09:00:00.811561 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n"] Nov 24 09:00:00 crc kubenswrapper[4886]: W1124 09:00:00.818738 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf350f6e8_25d9_410b_be41_c4d511d67599.slice/crio-46ea99b67bcd6685065fbc072e526916c370494cecfca3ee73cfeea634ca86b7 WatchSource:0}: Error finding container 46ea99b67bcd6685065fbc072e526916c370494cecfca3ee73cfeea634ca86b7: Status 404 returned error can't find the container with id 46ea99b67bcd6685065fbc072e526916c370494cecfca3ee73cfeea634ca86b7 Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.580841 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-gsdqn" event={"ID":"9475a865-8fb9-4c93-aeb0-09e9b8285a88","Type":"ContainerStarted","Data":"64d283fa2c7c276979ebdb565a87ae248b7f57c096b6f0d00dfac35bb0ebbe72"} Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.583863 4886 generic.go:334] "Generic (PLEG): container finished" podID="f350f6e8-25d9-410b-be41-c4d511d67599" containerID="fc00c94d88dea5b57b5c85a20ee620fad7ff66b4b9bd361fa15ab65aa74ec015" exitCode=0 Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.583924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" event={"ID":"f350f6e8-25d9-410b-be41-c4d511d67599","Type":"ContainerDied","Data":"fc00c94d88dea5b57b5c85a20ee620fad7ff66b4b9bd361fa15ab65aa74ec015"} Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.583954 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" 
event={"ID":"f350f6e8-25d9-410b-be41-c4d511d67599","Type":"ContainerStarted","Data":"46ea99b67bcd6685065fbc072e526916c370494cecfca3ee73cfeea634ca86b7"} Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.585317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ff82d" event={"ID":"fc0d7b30-aa61-4f00-a908-d13689ed0b04","Type":"ContainerStarted","Data":"4546df6e578ee5430e66888f8682606d3ed75bbd5946890335f0b867ccf8f4c2"} Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.587820 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" event={"ID":"7b1b394b-0362-4ee6-a956-48d7598ef6a2","Type":"ContainerStarted","Data":"476a31acb3fb6298c7430818d25c8072c7e2bd9cf6b9281c4136585e2062112e"} Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.587973 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.598170 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-gsdqn" podStartSLOduration=2.035392966 podStartE2EDuration="5.598126867s" podCreationTimestamp="2025-11-24 08:59:56 +0000 UTC" firstStartedPulling="2025-11-24 08:59:57.028866836 +0000 UTC m=+652.915604971" lastFinishedPulling="2025-11-24 09:00:00.591600737 +0000 UTC m=+656.478338872" observedRunningTime="2025-11-24 09:00:01.597912171 +0000 UTC m=+657.484650326" watchObservedRunningTime="2025-11-24 09:00:01.598126867 +0000 UTC m=+657.484865002" Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.648703 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-ff82d" podStartSLOduration=2.144085512 podStartE2EDuration="5.648675985s" podCreationTimestamp="2025-11-24 08:59:56 +0000 UTC" firstStartedPulling="2025-11-24 08:59:57.19293783 +0000 UTC 
m=+653.079675965" lastFinishedPulling="2025-11-24 09:00:00.697528303 +0000 UTC m=+656.584266438" observedRunningTime="2025-11-24 09:00:01.629007858 +0000 UTC m=+657.515745993" watchObservedRunningTime="2025-11-24 09:00:01.648675985 +0000 UTC m=+657.535414120" Nov 24 09:00:01 crc kubenswrapper[4886]: I1124 09:00:01.651802 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" podStartSLOduration=2.126189786 podStartE2EDuration="5.651787115s" podCreationTimestamp="2025-11-24 08:59:56 +0000 UTC" firstStartedPulling="2025-11-24 08:59:57.066023428 +0000 UTC m=+652.952761563" lastFinishedPulling="2025-11-24 09:00:00.591620747 +0000 UTC m=+656.478358892" observedRunningTime="2025-11-24 09:00:01.648352346 +0000 UTC m=+657.535090501" watchObservedRunningTime="2025-11-24 09:00:01.651787115 +0000 UTC m=+657.538525260" Nov 24 09:00:02 crc kubenswrapper[4886]: I1124 09:00:02.809268 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:02 crc kubenswrapper[4886]: I1124 09:00:02.951061 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7wwq\" (UniqueName: \"kubernetes.io/projected/f350f6e8-25d9-410b-be41-c4d511d67599-kube-api-access-h7wwq\") pod \"f350f6e8-25d9-410b-be41-c4d511d67599\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " Nov 24 09:00:02 crc kubenswrapper[4886]: I1124 09:00:02.951431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f350f6e8-25d9-410b-be41-c4d511d67599-secret-volume\") pod \"f350f6e8-25d9-410b-be41-c4d511d67599\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " Nov 24 09:00:02 crc kubenswrapper[4886]: I1124 09:00:02.951481 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f350f6e8-25d9-410b-be41-c4d511d67599-config-volume\") pod \"f350f6e8-25d9-410b-be41-c4d511d67599\" (UID: \"f350f6e8-25d9-410b-be41-c4d511d67599\") " Nov 24 09:00:02 crc kubenswrapper[4886]: I1124 09:00:02.952188 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f350f6e8-25d9-410b-be41-c4d511d67599-config-volume" (OuterVolumeSpecName: "config-volume") pod "f350f6e8-25d9-410b-be41-c4d511d67599" (UID: "f350f6e8-25d9-410b-be41-c4d511d67599"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:00:02 crc kubenswrapper[4886]: I1124 09:00:02.956368 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f350f6e8-25d9-410b-be41-c4d511d67599-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f350f6e8-25d9-410b-be41-c4d511d67599" (UID: "f350f6e8-25d9-410b-be41-c4d511d67599"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:00:02 crc kubenswrapper[4886]: I1124 09:00:02.957212 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f350f6e8-25d9-410b-be41-c4d511d67599-kube-api-access-h7wwq" (OuterVolumeSpecName: "kube-api-access-h7wwq") pod "f350f6e8-25d9-410b-be41-c4d511d67599" (UID: "f350f6e8-25d9-410b-be41-c4d511d67599"). InnerVolumeSpecName "kube-api-access-h7wwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:00:03 crc kubenswrapper[4886]: I1124 09:00:03.053246 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7wwq\" (UniqueName: \"kubernetes.io/projected/f350f6e8-25d9-410b-be41-c4d511d67599-kube-api-access-h7wwq\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:03 crc kubenswrapper[4886]: I1124 09:00:03.053310 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f350f6e8-25d9-410b-be41-c4d511d67599-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:03 crc kubenswrapper[4886]: I1124 09:00:03.053331 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f350f6e8-25d9-410b-be41-c4d511d67599-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:03 crc kubenswrapper[4886]: I1124 09:00:03.599535 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" event={"ID":"f350f6e8-25d9-410b-be41-c4d511d67599","Type":"ContainerDied","Data":"46ea99b67bcd6685065fbc072e526916c370494cecfca3ee73cfeea634ca86b7"} Nov 24 09:00:03 crc kubenswrapper[4886]: I1124 09:00:03.599581 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46ea99b67bcd6685065fbc072e526916c370494cecfca3ee73cfeea634ca86b7" Nov 24 09:00:03 crc kubenswrapper[4886]: I1124 09:00:03.599614 4886 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n" Nov 24 09:00:06 crc kubenswrapper[4886]: I1124 09:00:06.763968 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-9cfx6" Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.728927 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-657wc"] Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.734410 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89" gracePeriod=30 Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.734393 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="northd" containerID="cri-o://51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0" gracePeriod=30 Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.734439 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovn-controller" containerID="cri-o://97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9" gracePeriod=30 Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.734505 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="sbdb" containerID="cri-o://070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" gracePeriod=30 Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.734595 
4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovn-acl-logging" containerID="cri-o://48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d" gracePeriod=30 Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.734563 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kube-rbac-proxy-node" containerID="cri-o://f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7" gracePeriod=30 Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.734535 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="nbdb" containerID="cri-o://6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" gracePeriod=30 Nov 24 09:00:26 crc kubenswrapper[4886]: I1124 09:00:26.796279 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" containerID="cri-o://0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3" gracePeriod=30 Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.034123 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 is running failed: container process not found" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.034184 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 is running failed: container process not found" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.035185 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 is running failed: container process not found" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.035269 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 is running failed: container process not found" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.035448 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 is running failed: container process not found" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.035479 4886 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="sbdb" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.035957 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 is running failed: container process not found" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.036021 4886 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="nbdb" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.083409 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/3.log" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.086621 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovn-acl-logging/0.log" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.087118 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovn-controller/0.log" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.087739 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.146938 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hc97l"] Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147174 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="nbdb" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147190 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="nbdb" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147199 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147204 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147211 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovn-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147217 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovn-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147227 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147232 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147241 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f350f6e8-25d9-410b-be41-c4d511d67599" 
containerName="collect-profiles" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147250 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f350f6e8-25d9-410b-be41-c4d511d67599" containerName="collect-profiles" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147263 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="sbdb" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147270 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="sbdb" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147279 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kube-rbac-proxy-node" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147288 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kube-rbac-proxy-node" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147295 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147301 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147313 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="northd" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147319 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="northd" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147326 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" 
containerName="kubecfg-setup" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147333 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kubecfg-setup" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147343 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147350 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147361 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovn-acl-logging" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147367 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovn-acl-logging" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147469 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147479 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147486 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147495 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovn-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147503 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="nbdb" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147511 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f350f6e8-25d9-410b-be41-c4d511d67599" containerName="collect-profiles" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147523 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="northd" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147534 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147546 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="sbdb" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147556 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovn-acl-logging" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147565 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="kube-rbac-proxy-node" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147680 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147689 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.147698 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147708 4886 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147845 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.147855 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerName="ovnkube-controller" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.149843 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.258790 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-etc-openvswitch\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.258867 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-config\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.258889 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-ovn\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.258913 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m55nz\" (UniqueName: 
\"kubernetes.io/projected/03f9078c-6b20-46d5-ae2a-2eb20e236769-kube-api-access-m55nz\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.258959 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-kubelet\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.258941 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.258980 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-systemd-units\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259035 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-var-lib-openvswitch\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259052 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: 
"03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259099 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-var-lib-cni-networks-ovn-kubernetes\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259132 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-netd\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259177 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-node-log\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259220 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-script-lib\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259266 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-openvswitch\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 
09:00:27.259310 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-slash\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259346 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-bin\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259351 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259353 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259384 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259400 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259367 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-ovn-kubernetes\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259409 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259485 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovn-node-metrics-cert\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259435 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259429 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259440 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-node-log" (OuterVolumeSpecName: "node-log") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259441 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-slash" (OuterVolumeSpecName: "host-slash") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259527 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-systemd\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259466 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259602 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-log-socket\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259640 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-netns\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259565 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259647 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-log-socket" (OuterVolumeSpecName: "log-socket") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259686 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-env-overrides\") pod \"03f9078c-6b20-46d5-ae2a-2eb20e236769\" (UID: \"03f9078c-6b20-46d5-ae2a-2eb20e236769\") " Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259825 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259924 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.259989 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-etc-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260022 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-cni-bin\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260059 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-kubelet\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260083 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-systemd\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260107 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-run-netns\") pod \"ovnkube-node-hc97l\" (UID: 
\"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260199 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-run-ovn-kubernetes\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260228 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-cni-netd\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260253 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-var-lib-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260268 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260274 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-env-overrides\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260384 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-log-socket\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260566 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-slash\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260606 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260629 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e112739-e842-4b28-9725-42f1d8907066-ovn-node-metrics-cert\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260686 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-ovnkube-script-lib\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260773 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-ovn\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260843 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-ovnkube-config\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260874 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znckp\" (UniqueName: 
\"kubernetes.io/projected/6e112739-e842-4b28-9725-42f1d8907066-kube-api-access-znckp\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260907 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-systemd-units\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.260937 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-node-log\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261026 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261046 4886 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261088 4886 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261101 4886 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261114 4886 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261126 4886 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261138 4886 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261172 4886 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261187 4886 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261199 4886 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261210 4886 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 
crc kubenswrapper[4886]: I1124 09:00:27.261223 4886 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261234 4886 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261245 4886 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261259 4886 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261272 4886 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.261285 4886 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.265909 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.266489 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f9078c-6b20-46d5-ae2a-2eb20e236769-kube-api-access-m55nz" (OuterVolumeSpecName: "kube-api-access-m55nz") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "kube-api-access-m55nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.274055 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "03f9078c-6b20-46d5-ae2a-2eb20e236769" (UID: "03f9078c-6b20-46d5-ae2a-2eb20e236769"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.362869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-kubelet\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.362937 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-systemd\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.362957 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-run-netns\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.362990 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-run-ovn-kubernetes\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.362985 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-kubelet\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363075 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-run-ovn-kubernetes\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363019 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-cni-netd\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363087 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-cni-netd\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363076 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-run-netns\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363146 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-var-lib-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363167 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-systemd\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363189 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-var-lib-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-env-overrides\") pod 
\"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363371 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-log-socket\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363451 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-log-socket\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363457 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-slash\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363496 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363532 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-slash\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" 
Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363543 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363561 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363585 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e112739-e842-4b28-9725-42f1d8907066-ovn-node-metrics-cert\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363598 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363645 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-ovnkube-script-lib\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363685 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-ovn\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363731 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-ovnkube-config\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363758 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-run-ovn\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363771 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znckp\" (UniqueName: \"kubernetes.io/projected/6e112739-e842-4b28-9725-42f1d8907066-kube-api-access-znckp\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363811 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-systemd-units\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 
09:00:27.363844 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-node-log\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363898 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-systemd-units\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.363969 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-node-log\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.364029 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-etc-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.364068 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-cni-bin\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.364077 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-etc-openvswitch\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.364029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-env-overrides\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.364355 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e112739-e842-4b28-9725-42f1d8907066-host-cni-bin\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.364409 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03f9078c-6b20-46d5-ae2a-2eb20e236769-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.364436 4886 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03f9078c-6b20-46d5-ae2a-2eb20e236769-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.364458 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m55nz\" (UniqueName: \"kubernetes.io/projected/03f9078c-6b20-46d5-ae2a-2eb20e236769-kube-api-access-m55nz\") on node \"crc\" DevicePath \"\"" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.365006 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-ovnkube-config\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.365208 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e112739-e842-4b28-9725-42f1d8907066-ovnkube-script-lib\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.367749 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e112739-e842-4b28-9725-42f1d8907066-ovn-node-metrics-cert\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.378967 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znckp\" (UniqueName: \"kubernetes.io/projected/6e112739-e842-4b28-9725-42f1d8907066-kube-api-access-znckp\") pod \"ovnkube-node-hc97l\" (UID: \"6e112739-e842-4b28-9725-42f1d8907066\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.462792 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.738206 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/2.log" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.739428 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/1.log" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.739470 4886 generic.go:334] "Generic (PLEG): container finished" podID="5d515fec-60f3-4bf7-9ba4-697bb691b670" containerID="97763f8ce77f782f6462d9de656426c9d79e3b8ffc5a0ddcfbe4c68da2ec9905" exitCode=2 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.739531 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dk8j" event={"ID":"5d515fec-60f3-4bf7-9ba4-697bb691b670","Type":"ContainerDied","Data":"97763f8ce77f782f6462d9de656426c9d79e3b8ffc5a0ddcfbe4c68da2ec9905"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.739572 4886 scope.go:117] "RemoveContainer" containerID="d14b62d61a68782ff616caa790ed7dd0213d43cd7f9efca2204e0f0bb8400d60" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.740309 4886 scope.go:117] "RemoveContainer" containerID="97763f8ce77f782f6462d9de656426c9d79e3b8ffc5a0ddcfbe4c68da2ec9905" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.740671 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2dk8j_openshift-multus(5d515fec-60f3-4bf7-9ba4-697bb691b670)\"" pod="openshift-multus/multus-2dk8j" podUID="5d515fec-60f3-4bf7-9ba4-697bb691b670" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.745883 4886 generic.go:334] "Generic (PLEG): container finished" podID="6e112739-e842-4b28-9725-42f1d8907066" 
containerID="c0a11397f4d6279998a6c601897a82f2e9570b956877efad7bcc1427264a4441" exitCode=0 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.745951 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerDied","Data":"c0a11397f4d6279998a6c601897a82f2e9570b956877efad7bcc1427264a4441"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.745981 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"8db7540ff155f961527c5b75b235eb95c632bbf4647c1cbb7440169482d740b1"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.751710 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovnkube-controller/3.log" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.755608 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovn-acl-logging/0.log" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.756989 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-657wc_03f9078c-6b20-46d5-ae2a-2eb20e236769/ovn-controller/0.log" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.759991 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3" exitCode=0 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760041 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" exitCode=0 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760043 
4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760117 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760057 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" exitCode=0 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760140 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0" exitCode=0 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760165 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89" exitCode=0 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760173 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7" exitCode=0 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760181 4886 generic.go:334] 
"Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d" exitCode=143 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760189 4886 generic.go:334] "Generic (PLEG): container finished" podID="03f9078c-6b20-46d5-ae2a-2eb20e236769" containerID="97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9" exitCode=143 Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760203 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760216 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760243 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760256 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760264 4886 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760269 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760276 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760281 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760286 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760291 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760297 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760301 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760308 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760318 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760324 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760329 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760334 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760340 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760345 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760350 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 
09:00:27.760355 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760361 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760366 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760374 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760382 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760391 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760398 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760405 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760411 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760417 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760423 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760427 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760433 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760439 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760446 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" event={"ID":"03f9078c-6b20-46d5-ae2a-2eb20e236769","Type":"ContainerDied","Data":"b0b380b152ef8a74d5787cfdeca82f7ca7ac64d220106858371ba4489ff5e2da"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760454 4886 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760460 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760465 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760470 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760475 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760480 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760485 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760490 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760495 4886 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.760500 4886 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.762024 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-657wc" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.771208 4886 scope.go:117] "RemoveContainer" containerID="0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.787935 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.811061 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-657wc"] Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.821741 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-657wc"] Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.822244 4886 scope.go:117] "RemoveContainer" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.841813 4886 scope.go:117] "RemoveContainer" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.855740 4886 scope.go:117] "RemoveContainer" containerID="51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.868852 4886 scope.go:117] "RemoveContainer" 
containerID="e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.881094 4886 scope.go:117] "RemoveContainer" containerID="f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.931474 4886 scope.go:117] "RemoveContainer" containerID="48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.945248 4886 scope.go:117] "RemoveContainer" containerID="97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.969948 4886 scope.go:117] "RemoveContainer" containerID="53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.983250 4886 scope.go:117] "RemoveContainer" containerID="0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.984087 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": container with ID starting with 0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3 not found: ID does not exist" containerID="0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.984118 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} err="failed to get container status \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": rpc error: code = NotFound desc = could not find container \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": container with ID starting with 
0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.984141 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.985010 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": container with ID starting with 7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698 not found: ID does not exist" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.985033 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} err="failed to get container status \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": rpc error: code = NotFound desc = could not find container \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": container with ID starting with 7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.985048 4886 scope.go:117] "RemoveContainer" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.985735 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": container with ID starting with 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 not found: ID does not exist" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" Nov 24 09:00:27 crc 
kubenswrapper[4886]: I1124 09:00:27.985786 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} err="failed to get container status \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": rpc error: code = NotFound desc = could not find container \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": container with ID starting with 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.985824 4886 scope.go:117] "RemoveContainer" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.986407 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": container with ID starting with 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 not found: ID does not exist" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.986439 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} err="failed to get container status \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": rpc error: code = NotFound desc = could not find container \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": container with ID starting with 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.986461 4886 scope.go:117] "RemoveContainer" containerID="51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0" Nov 24 
09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.987388 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": container with ID starting with 51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0 not found: ID does not exist" containerID="51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.987415 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} err="failed to get container status \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": rpc error: code = NotFound desc = could not find container \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": container with ID starting with 51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.987432 4886 scope.go:117] "RemoveContainer" containerID="e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.987719 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": container with ID starting with e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89 not found: ID does not exist" containerID="e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.987743 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} err="failed to get container status 
\"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": rpc error: code = NotFound desc = could not find container \"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": container with ID starting with e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.987756 4886 scope.go:117] "RemoveContainer" containerID="f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.988056 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": container with ID starting with f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7 not found: ID does not exist" containerID="f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.988082 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} err="failed to get container status \"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": rpc error: code = NotFound desc = could not find container \"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": container with ID starting with f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.988100 4886 scope.go:117] "RemoveContainer" containerID="48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.988785 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": container with ID starting with 48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d not found: ID does not exist" containerID="48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.988807 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} err="failed to get container status \"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": rpc error: code = NotFound desc = could not find container \"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": container with ID starting with 48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.988821 4886 scope.go:117] "RemoveContainer" containerID="97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.989093 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": container with ID starting with 97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9 not found: ID does not exist" containerID="97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.989127 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} err="failed to get container status \"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": rpc error: code = NotFound desc = could not find container \"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": container with ID 
starting with 97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.989163 4886 scope.go:117] "RemoveContainer" containerID="53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76" Nov 24 09:00:27 crc kubenswrapper[4886]: E1124 09:00:27.989536 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": container with ID starting with 53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76 not found: ID does not exist" containerID="53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.989566 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} err="failed to get container status \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": rpc error: code = NotFound desc = could not find container \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": container with ID starting with 53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.989581 4886 scope.go:117] "RemoveContainer" containerID="0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.989807 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} err="failed to get container status \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": rpc error: code = NotFound desc = could not find container \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": 
container with ID starting with 0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.989837 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.990087 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} err="failed to get container status \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": rpc error: code = NotFound desc = could not find container \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": container with ID starting with 7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.990113 4886 scope.go:117] "RemoveContainer" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.990375 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} err="failed to get container status \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": rpc error: code = NotFound desc = could not find container \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": container with ID starting with 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.990402 4886 scope.go:117] "RemoveContainer" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.990769 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} err="failed to get container status \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": rpc error: code = NotFound desc = could not find container \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": container with ID starting with 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.990816 4886 scope.go:117] "RemoveContainer" containerID="51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.991048 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} err="failed to get container status \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": rpc error: code = NotFound desc = could not find container \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": container with ID starting with 51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.991069 4886 scope.go:117] "RemoveContainer" containerID="e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.991791 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} err="failed to get container status \"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": rpc error: code = NotFound desc = could not find container \"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": container with ID starting with e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89 not found: ID does not 
exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.991845 4886 scope.go:117] "RemoveContainer" containerID="f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.992169 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} err="failed to get container status \"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": rpc error: code = NotFound desc = could not find container \"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": container with ID starting with f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.992196 4886 scope.go:117] "RemoveContainer" containerID="48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.992468 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} err="failed to get container status \"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": rpc error: code = NotFound desc = could not find container \"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": container with ID starting with 48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.992556 4886 scope.go:117] "RemoveContainer" containerID="97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.993338 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} err="failed to get container status 
\"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": rpc error: code = NotFound desc = could not find container \"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": container with ID starting with 97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.993368 4886 scope.go:117] "RemoveContainer" containerID="53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.993695 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} err="failed to get container status \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": rpc error: code = NotFound desc = could not find container \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": container with ID starting with 53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.993728 4886 scope.go:117] "RemoveContainer" containerID="0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.994011 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} err="failed to get container status \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": rpc error: code = NotFound desc = could not find container \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": container with ID starting with 0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.994041 4886 scope.go:117] "RemoveContainer" 
containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.994333 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} err="failed to get container status \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": rpc error: code = NotFound desc = could not find container \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": container with ID starting with 7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.994361 4886 scope.go:117] "RemoveContainer" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.994598 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} err="failed to get container status \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": rpc error: code = NotFound desc = could not find container \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": container with ID starting with 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.994624 4886 scope.go:117] "RemoveContainer" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.994873 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} err="failed to get container status \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": rpc error: code = NotFound desc = could 
not find container \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": container with ID starting with 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.994896 4886 scope.go:117] "RemoveContainer" containerID="51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.995178 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} err="failed to get container status \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": rpc error: code = NotFound desc = could not find container \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": container with ID starting with 51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.995202 4886 scope.go:117] "RemoveContainer" containerID="e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.995427 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} err="failed to get container status \"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": rpc error: code = NotFound desc = could not find container \"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": container with ID starting with e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.995444 4886 scope.go:117] "RemoveContainer" containerID="f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 
09:00:27.995650 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} err="failed to get container status \"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": rpc error: code = NotFound desc = could not find container \"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": container with ID starting with f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.995677 4886 scope.go:117] "RemoveContainer" containerID="48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.995940 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} err="failed to get container status \"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": rpc error: code = NotFound desc = could not find container \"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": container with ID starting with 48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.995965 4886 scope.go:117] "RemoveContainer" containerID="97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.996208 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} err="failed to get container status \"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": rpc error: code = NotFound desc = could not find container \"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": container with ID starting with 
97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.996227 4886 scope.go:117] "RemoveContainer" containerID="53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.996452 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} err="failed to get container status \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": rpc error: code = NotFound desc = could not find container \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": container with ID starting with 53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.996469 4886 scope.go:117] "RemoveContainer" containerID="0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.996698 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3"} err="failed to get container status \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": rpc error: code = NotFound desc = could not find container \"0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3\": container with ID starting with 0d891a12cf25b868d102f1ffa18fa60b421774bbfe09a824891723acc4d93fd3 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.996714 4886 scope.go:117] "RemoveContainer" containerID="7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.997068 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698"} err="failed to get container status \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": rpc error: code = NotFound desc = could not find container \"7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698\": container with ID starting with 7339f5b94dd48022cf3d53105e626424c485536d2deb094f3aba90f56c27c698 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.997086 4886 scope.go:117] "RemoveContainer" containerID="070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.997304 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4"} err="failed to get container status \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": rpc error: code = NotFound desc = could not find container \"070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4\": container with ID starting with 070c77876ce83a479d0ab423d8107a14d506c655b476b2719766d78ad3c88dc4 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.997324 4886 scope.go:117] "RemoveContainer" containerID="6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.997521 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5"} err="failed to get container status \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": rpc error: code = NotFound desc = could not find container \"6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5\": container with ID starting with 6f504f7f421bd783399a6d08fa713f0a885d8edd5f5245933a98b6830c5573d5 not found: ID does not 
exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.997540 4886 scope.go:117] "RemoveContainer" containerID="51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.997755 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0"} err="failed to get container status \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": rpc error: code = NotFound desc = could not find container \"51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0\": container with ID starting with 51cd23eb1b9056a44b5716ce542fafec8e3ebde494f136655a2a5d838ec90ab0 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.997785 4886 scope.go:117] "RemoveContainer" containerID="e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.998000 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89"} err="failed to get container status \"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": rpc error: code = NotFound desc = could not find container \"e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89\": container with ID starting with e8c129c56a2c69042516d9caa6abdf4095aefde8fdaef4953a8eed353940be89 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.998025 4886 scope.go:117] "RemoveContainer" containerID="f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.998288 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7"} err="failed to get container status 
\"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": rpc error: code = NotFound desc = could not find container \"f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7\": container with ID starting with f59f3a090f9824d71c23fcb40dde8c9d34fa8116c17ba79a45c005b2719f3dc7 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.998305 4886 scope.go:117] "RemoveContainer" containerID="48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.998531 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d"} err="failed to get container status \"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": rpc error: code = NotFound desc = could not find container \"48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d\": container with ID starting with 48584e4993cf06ba4b29a0732136f0f86c23d81adea76e945015ffafb462819d not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.998558 4886 scope.go:117] "RemoveContainer" containerID="97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.998811 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9"} err="failed to get container status \"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": rpc error: code = NotFound desc = could not find container \"97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9\": container with ID starting with 97477bfe13de2286806b9a8e3c723e3828c1b0ef182329585eaef507dd0c36f9 not found: ID does not exist" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.998835 4886 scope.go:117] "RemoveContainer" 
containerID="53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76" Nov 24 09:00:27 crc kubenswrapper[4886]: I1124 09:00:27.999141 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76"} err="failed to get container status \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": rpc error: code = NotFound desc = could not find container \"53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76\": container with ID starting with 53fba806955f4d7e98c5a2c830bc39cc6f9ac2a6248099aa13da6dff5a0cde76 not found: ID does not exist" Nov 24 09:00:28 crc kubenswrapper[4886]: I1124 09:00:28.767895 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/2.log" Nov 24 09:00:28 crc kubenswrapper[4886]: I1124 09:00:28.771590 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"f1f8a8511f9dd872438a63e73631942b5958932dd97d5c2bcac98d18d47fb415"} Nov 24 09:00:28 crc kubenswrapper[4886]: I1124 09:00:28.771628 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"91e0ba4b7c44e46546be8743530aa2bbd95c6f93010e9e944fdf3aea6ecf4d89"} Nov 24 09:00:28 crc kubenswrapper[4886]: I1124 09:00:28.771641 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"4482a47398ff03fea705cfa99f1bd34f203f0938ce261b2e2e8a95b1af25795e"} Nov 24 09:00:28 crc kubenswrapper[4886]: I1124 09:00:28.771650 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"a71382031e2ea2265816bc2cf5308bda89cb06a854a11e57b2030fdafb69a0eb"} Nov 24 09:00:28 crc kubenswrapper[4886]: I1124 09:00:28.771660 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"4fd26c1a887c45fb7b3c863e587aae443891e1519e097fb0eaa06369f58cbf97"} Nov 24 09:00:28 crc kubenswrapper[4886]: I1124 09:00:28.771671 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"3d1d758bb252c70415090b6de69ed76ed7ae7d4e00558a2b4de873238bb79314"} Nov 24 09:00:28 crc kubenswrapper[4886]: I1124 09:00:28.862347 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f9078c-6b20-46d5-ae2a-2eb20e236769" path="/var/lib/kubelet/pods/03f9078c-6b20-46d5-ae2a-2eb20e236769/volumes" Nov 24 09:00:30 crc kubenswrapper[4886]: I1124 09:00:30.789365 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"ab3bf39142b89f1c15597091a784009371aaeb567551e1100f3eb498d1cd0db1"} Nov 24 09:00:33 crc kubenswrapper[4886]: I1124 09:00:33.821863 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" event={"ID":"6e112739-e842-4b28-9725-42f1d8907066","Type":"ContainerStarted","Data":"3b9ecb59c6de7976c6d27bfd1354e90216c7816faf2b64fac46f06dd1b3dfcca"} Nov 24 09:00:33 crc kubenswrapper[4886]: I1124 09:00:33.822866 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" Nov 24 09:00:33 crc kubenswrapper[4886]: I1124 09:00:33.822889 4886 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l"
Nov 24 09:00:33 crc kubenswrapper[4886]: I1124 09:00:33.855612 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l" podStartSLOduration=6.85558937 podStartE2EDuration="6.85558937s" podCreationTimestamp="2025-11-24 09:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:00:33.855324752 +0000 UTC m=+689.742062887" watchObservedRunningTime="2025-11-24 09:00:33.85558937 +0000 UTC m=+689.742327505"
Nov 24 09:00:33 crc kubenswrapper[4886]: I1124 09:00:33.860577 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l"
Nov 24 09:00:34 crc kubenswrapper[4886]: I1124 09:00:34.827781 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l"
Nov 24 09:00:34 crc kubenswrapper[4886]: I1124 09:00:34.862943 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l"
Nov 24 09:00:40 crc kubenswrapper[4886]: I1124 09:00:40.851043 4886 scope.go:117] "RemoveContainer" containerID="97763f8ce77f782f6462d9de656426c9d79e3b8ffc5a0ddcfbe4c68da2ec9905"
Nov 24 09:00:40 crc kubenswrapper[4886]: E1124 09:00:40.851850 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2dk8j_openshift-multus(5d515fec-60f3-4bf7-9ba4-697bb691b670)\"" pod="openshift-multus/multus-2dk8j" podUID="5d515fec-60f3-4bf7-9ba4-697bb691b670"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.237933 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"]
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.240227 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.242879 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.245250 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.245290 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.245319 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtgcw\" (UniqueName: \"kubernetes.io/projected/45839f6c-5966-4fa5-84da-187fc952f624-kube-api-access-gtgcw\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.255226 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"]
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.346666 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.346743 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.346783 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtgcw\" (UniqueName: \"kubernetes.io/projected/45839f6c-5966-4fa5-84da-187fc952f624-kube-api-access-gtgcw\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.347400 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.347519 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.373319 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtgcw\" (UniqueName: \"kubernetes.io/projected/45839f6c-5966-4fa5-84da-187fc952f624-kube-api-access-gtgcw\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.566115 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: E1124 09:00:48.594540 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace_45839f6c-5966-4fa5-84da-187fc952f624_0(0dfff656664607fd72fa7428f5e0e0546498e3522664feeb3cd44f84817d4884): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 09:00:48 crc kubenswrapper[4886]: E1124 09:00:48.594621 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace_45839f6c-5966-4fa5-84da-187fc952f624_0(0dfff656664607fd72fa7428f5e0e0546498e3522664feeb3cd44f84817d4884): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: E1124 09:00:48.594652 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace_45839f6c-5966-4fa5-84da-187fc952f624_0(0dfff656664607fd72fa7428f5e0e0546498e3522664feeb3cd44f84817d4884): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: E1124 09:00:48.594705 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace(45839f6c-5966-4fa5-84da-187fc952f624)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace(45839f6c-5966-4fa5-84da-187fc952f624)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace_45839f6c-5966-4fa5-84da-187fc952f624_0(0dfff656664607fd72fa7428f5e0e0546498e3522664feeb3cd44f84817d4884): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb" podUID="45839f6c-5966-4fa5-84da-187fc952f624"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.915120 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: I1124 09:00:48.915737 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: E1124 09:00:48.955486 4886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace_45839f6c-5966-4fa5-84da-187fc952f624_0(f0be0bf720936569ef72e136ec38bd3864bf4685dbc1808f19893e070298a852): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 24 09:00:48 crc kubenswrapper[4886]: E1124 09:00:48.955632 4886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace_45839f6c-5966-4fa5-84da-187fc952f624_0(f0be0bf720936569ef72e136ec38bd3864bf4685dbc1808f19893e070298a852): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: E1124 09:00:48.955690 4886 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace_45839f6c-5966-4fa5-84da-187fc952f624_0(f0be0bf720936569ef72e136ec38bd3864bf4685dbc1808f19893e070298a852): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:00:48 crc kubenswrapper[4886]: E1124 09:00:48.955772 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace(45839f6c-5966-4fa5-84da-187fc952f624)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace(45839f6c-5966-4fa5-84da-187fc952f624)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_openshift-marketplace_45839f6c-5966-4fa5-84da-187fc952f624_0(f0be0bf720936569ef72e136ec38bd3864bf4685dbc1808f19893e070298a852): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb" podUID="45839f6c-5966-4fa5-84da-187fc952f624"
Nov 24 09:00:54 crc kubenswrapper[4886]: I1124 09:00:54.852484 4886 scope.go:117] "RemoveContainer" containerID="97763f8ce77f782f6462d9de656426c9d79e3b8ffc5a0ddcfbe4c68da2ec9905"
Nov 24 09:00:55 crc kubenswrapper[4886]: I1124 09:00:55.957994 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2dk8j_5d515fec-60f3-4bf7-9ba4-697bb691b670/kube-multus/2.log"
Nov 24 09:00:55 crc kubenswrapper[4886]: I1124 09:00:55.958456 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2dk8j" event={"ID":"5d515fec-60f3-4bf7-9ba4-697bb691b670","Type":"ContainerStarted","Data":"801bc1bd5ade2481a2dd0c4a6ee6867873bca0aa6b6d00eff114a2423328821e"}
Nov 24 09:00:57 crc kubenswrapper[4886]: I1124 09:00:57.491302 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hc97l"
Nov 24 09:01:00 crc kubenswrapper[4886]: I1124 09:01:00.849194 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:01:00 crc kubenswrapper[4886]: I1124 09:01:00.850554 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:01:01 crc kubenswrapper[4886]: I1124 09:01:01.080123 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"]
Nov 24 09:01:02 crc kubenswrapper[4886]: I1124 09:01:01.999637 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb" event={"ID":"45839f6c-5966-4fa5-84da-187fc952f624","Type":"ContainerStarted","Data":"340ac969f2e9957b4dcae1233309e6d0a2c42fc53e232d518e4a0ff3975e87a7"}
Nov 24 09:01:02 crc kubenswrapper[4886]: I1124 09:01:01.999983 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb" event={"ID":"45839f6c-5966-4fa5-84da-187fc952f624","Type":"ContainerStarted","Data":"669d8cf44afa6f99b9fbbe9001bb741e0821afcd352a9ff02f4b93d14d256766"}
Nov 24 09:01:03 crc kubenswrapper[4886]: I1124 09:01:03.009743 4886 generic.go:334] "Generic (PLEG): container finished" podID="45839f6c-5966-4fa5-84da-187fc952f624" containerID="340ac969f2e9957b4dcae1233309e6d0a2c42fc53e232d518e4a0ff3975e87a7" exitCode=0
Nov 24 09:01:03 crc kubenswrapper[4886]: I1124 09:01:03.009805 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb" event={"ID":"45839f6c-5966-4fa5-84da-187fc952f624","Type":"ContainerDied","Data":"340ac969f2e9957b4dcae1233309e6d0a2c42fc53e232d518e4a0ff3975e87a7"}
Nov 24 09:01:05 crc kubenswrapper[4886]: I1124 09:01:05.028428 4886 generic.go:334] "Generic (PLEG): container finished" podID="45839f6c-5966-4fa5-84da-187fc952f624" containerID="a37ba13844666140c747dea299422d2e952c169093aa19f2b56cb1b5a6a89d24" exitCode=0
Nov 24 09:01:05 crc kubenswrapper[4886]: I1124 09:01:05.028495 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb" event={"ID":"45839f6c-5966-4fa5-84da-187fc952f624","Type":"ContainerDied","Data":"a37ba13844666140c747dea299422d2e952c169093aa19f2b56cb1b5a6a89d24"}
Nov 24 09:01:06 crc kubenswrapper[4886]: I1124 09:01:06.037205 4886 generic.go:334] "Generic (PLEG): container finished" podID="45839f6c-5966-4fa5-84da-187fc952f624" containerID="0591a8f321279b956ad0406160fb70fa70b69f16bc4bef73be911e5b4d1f6106" exitCode=0
Nov 24 09:01:06 crc kubenswrapper[4886]: I1124 09:01:06.037319 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb" event={"ID":"45839f6c-5966-4fa5-84da-187fc952f624","Type":"ContainerDied","Data":"0591a8f321279b956ad0406160fb70fa70b69f16bc4bef73be911e5b4d1f6106"}
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.281741 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.427427 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtgcw\" (UniqueName: \"kubernetes.io/projected/45839f6c-5966-4fa5-84da-187fc952f624-kube-api-access-gtgcw\") pod \"45839f6c-5966-4fa5-84da-187fc952f624\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") "
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.427617 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-util\") pod \"45839f6c-5966-4fa5-84da-187fc952f624\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") "
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.427771 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-bundle\") pod \"45839f6c-5966-4fa5-84da-187fc952f624\" (UID: \"45839f6c-5966-4fa5-84da-187fc952f624\") "
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.428459 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-bundle" (OuterVolumeSpecName: "bundle") pod "45839f6c-5966-4fa5-84da-187fc952f624" (UID: "45839f6c-5966-4fa5-84da-187fc952f624"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.436523 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45839f6c-5966-4fa5-84da-187fc952f624-kube-api-access-gtgcw" (OuterVolumeSpecName: "kube-api-access-gtgcw") pod "45839f6c-5966-4fa5-84da-187fc952f624" (UID: "45839f6c-5966-4fa5-84da-187fc952f624"). InnerVolumeSpecName "kube-api-access-gtgcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.438888 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-util" (OuterVolumeSpecName: "util") pod "45839f6c-5966-4fa5-84da-187fc952f624" (UID: "45839f6c-5966-4fa5-84da-187fc952f624"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.528969 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-util\") on node \"crc\" DevicePath \"\""
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.529109 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45839f6c-5966-4fa5-84da-187fc952f624-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:01:07 crc kubenswrapper[4886]: I1124 09:01:07.529123 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtgcw\" (UniqueName: \"kubernetes.io/projected/45839f6c-5966-4fa5-84da-187fc952f624-kube-api-access-gtgcw\") on node \"crc\" DevicePath \"\""
Nov 24 09:01:08 crc kubenswrapper[4886]: I1124 09:01:08.050493 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb" event={"ID":"45839f6c-5966-4fa5-84da-187fc952f624","Type":"ContainerDied","Data":"669d8cf44afa6f99b9fbbe9001bb741e0821afcd352a9ff02f4b93d14d256766"}
Nov 24 09:01:08 crc kubenswrapper[4886]: I1124 09:01:08.050544 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="669d8cf44afa6f99b9fbbe9001bb741e0821afcd352a9ff02f4b93d14d256766"
Nov 24 09:01:08 crc kubenswrapper[4886]: I1124 09:01:08.050915 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.853581 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-mc67m"]
Nov 24 09:01:09 crc kubenswrapper[4886]: E1124 09:01:09.854073 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45839f6c-5966-4fa5-84da-187fc952f624" containerName="extract"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.854086 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="45839f6c-5966-4fa5-84da-187fc952f624" containerName="extract"
Nov 24 09:01:09 crc kubenswrapper[4886]: E1124 09:01:09.854103 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45839f6c-5966-4fa5-84da-187fc952f624" containerName="util"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.854112 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="45839f6c-5966-4fa5-84da-187fc952f624" containerName="util"
Nov 24 09:01:09 crc kubenswrapper[4886]: E1124 09:01:09.854122 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45839f6c-5966-4fa5-84da-187fc952f624" containerName="pull"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.854131 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="45839f6c-5966-4fa5-84da-187fc952f624" containerName="pull"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.854257 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="45839f6c-5966-4fa5-84da-187fc952f624" containerName="extract"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.854618 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-mc67m"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.858285 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.858315 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.858315 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2rxzs"
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.912873 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-mc67m"]
Nov 24 09:01:09 crc kubenswrapper[4886]: I1124 09:01:09.960443 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78gzl\" (UniqueName: \"kubernetes.io/projected/2c6833a8-49fc-4959-b487-21009d6da024-kube-api-access-78gzl\") pod \"nmstate-operator-557fdffb88-mc67m\" (UID: \"2c6833a8-49fc-4959-b487-21009d6da024\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-mc67m"
Nov 24 09:01:10 crc kubenswrapper[4886]: I1124 09:01:10.062605 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78gzl\" (UniqueName: \"kubernetes.io/projected/2c6833a8-49fc-4959-b487-21009d6da024-kube-api-access-78gzl\") pod \"nmstate-operator-557fdffb88-mc67m\" (UID: \"2c6833a8-49fc-4959-b487-21009d6da024\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-mc67m"
Nov 24 09:01:10 crc kubenswrapper[4886]: I1124 09:01:10.093910 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78gzl\" (UniqueName: \"kubernetes.io/projected/2c6833a8-49fc-4959-b487-21009d6da024-kube-api-access-78gzl\") pod \"nmstate-operator-557fdffb88-mc67m\" (UID: \"2c6833a8-49fc-4959-b487-21009d6da024\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-mc67m"
Nov 24 09:01:10 crc kubenswrapper[4886]: I1124 09:01:10.174564 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-mc67m"
Nov 24 09:01:10 crc kubenswrapper[4886]: I1124 09:01:10.372604 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-mc67m"]
Nov 24 09:01:10 crc kubenswrapper[4886]: W1124 09:01:10.383202 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c6833a8_49fc_4959_b487_21009d6da024.slice/crio-4c82086d9419181d6b151badcbe48451e58d208d900999394baf24eb7cb9b805 WatchSource:0}: Error finding container 4c82086d9419181d6b151badcbe48451e58d208d900999394baf24eb7cb9b805: Status 404 returned error can't find the container with id 4c82086d9419181d6b151badcbe48451e58d208d900999394baf24eb7cb9b805
Nov 24 09:01:11 crc kubenswrapper[4886]: I1124 09:01:11.070057 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-mc67m" event={"ID":"2c6833a8-49fc-4959-b487-21009d6da024","Type":"ContainerStarted","Data":"4c82086d9419181d6b151badcbe48451e58d208d900999394baf24eb7cb9b805"}
Nov 24 09:01:14 crc kubenswrapper[4886]: I1124 09:01:14.089538 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-mc67m" event={"ID":"2c6833a8-49fc-4959-b487-21009d6da024","Type":"ContainerStarted","Data":"e100f1da329ec95f2f6eabe07447c473c786c0b1dffec126d92f5ebb887a02ef"}
Nov 24 09:01:14 crc kubenswrapper[4886]: I1124 09:01:14.111698 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-mc67m" podStartSLOduration=2.556097114 podStartE2EDuration="5.111674481s" podCreationTimestamp="2025-11-24 09:01:09 +0000 UTC" firstStartedPulling="2025-11-24 09:01:10.385379504 +0000 UTC m=+726.272117639" lastFinishedPulling="2025-11-24 09:01:12.940956871 +0000 UTC m=+728.827695006" observedRunningTime="2025-11-24 09:01:14.105091676 +0000 UTC m=+729.991829811" watchObservedRunningTime="2025-11-24 09:01:14.111674481 +0000 UTC m=+729.998412616"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.656602 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5"]
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.658010 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.661908 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dhgkc"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.676952 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"]
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.678087 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.682984 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.702100 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"]
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.708624 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bxczf"]
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.709734 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.735232 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5"]
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.783575 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6rh4\" (UniqueName: \"kubernetes.io/projected/33d55c5c-55cd-453e-8888-c064a7e0e36d-kube-api-access-v6rh4\") pod \"nmstate-webhook-6b89b748d8-dvjwb\" (UID: \"33d55c5c-55cd-453e-8888-c064a7e0e36d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.783642 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8drr\" (UniqueName: \"kubernetes.io/projected/028a41e3-6c82-4e95-a4e5-fc835e4d75af-kube-api-access-g8drr\") pod \"nmstate-metrics-5dcf9c57c5-646k5\" (UID: \"028a41e3-6c82-4e95-a4e5-fc835e4d75af\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.783684 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-ovs-socket\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.783717 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-nmstate-lock\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.783756 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-dbus-socket\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.783782 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q627c\" (UniqueName: \"kubernetes.io/projected/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-kube-api-access-q627c\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.783809 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/33d55c5c-55cd-453e-8888-c064a7e0e36d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-dvjwb\" (UID: \"33d55c5c-55cd-453e-8888-c064a7e0e36d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.842408 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd"]
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.843545 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.846763 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.846931 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.847110 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ck46b"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.870534 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd"]
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.885737 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6rh4\" (UniqueName: \"kubernetes.io/projected/33d55c5c-55cd-453e-8888-c064a7e0e36d-kube-api-access-v6rh4\") pod \"nmstate-webhook-6b89b748d8-dvjwb\" (UID: \"33d55c5c-55cd-453e-8888-c064a7e0e36d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.885824 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8drr\" (UniqueName: \"kubernetes.io/projected/028a41e3-6c82-4e95-a4e5-fc835e4d75af-kube-api-access-g8drr\") pod \"nmstate-metrics-5dcf9c57c5-646k5\" (UID: \"028a41e3-6c82-4e95-a4e5-fc835e4d75af\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.885875 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-ovs-socket\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.885899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-nmstate-lock\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.885930 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-dbus-socket\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.885970 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q627c\" (UniqueName: \"kubernetes.io/projected/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-kube-api-access-q627c\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.885996 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/33d55c5c-55cd-453e-8888-c064a7e0e36d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-dvjwb\" (UID: \"33d55c5c-55cd-453e-8888-c064a7e0e36d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.886242 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-nmstate-lock\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.886304 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-ovs-socket\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.886773 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-dbus-socket\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.896104 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/33d55c5c-55cd-453e-8888-c064a7e0e36d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-dvjwb\" (UID: \"33d55c5c-55cd-453e-8888-c064a7e0e36d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.913182 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8drr\" (UniqueName: \"kubernetes.io/projected/028a41e3-6c82-4e95-a4e5-fc835e4d75af-kube-api-access-g8drr\") pod \"nmstate-metrics-5dcf9c57c5-646k5\" (UID: \"028a41e3-6c82-4e95-a4e5-fc835e4d75af\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.914499 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q627c\" (UniqueName: \"kubernetes.io/projected/50df2428-7c0e-4f4a-9c13-dd5cb4038f2e-kube-api-access-q627c\") pod \"nmstate-handler-bxczf\" (UID: \"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e\") " pod="openshift-nmstate/nmstate-handler-bxczf"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.918085 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6rh4\" (UniqueName: \"kubernetes.io/projected/33d55c5c-55cd-453e-8888-c064a7e0e36d-kube-api-access-v6rh4\") pod \"nmstate-webhook-6b89b748d8-dvjwb\" (UID: \"33d55c5c-55cd-453e-8888-c064a7e0e36d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.987431 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/166ba125-3d7b-4ab8-bbca-7f707fd9261b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.987484 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ckn9\" (UniqueName: \"kubernetes.io/projected/166ba125-3d7b-4ab8-bbca-7f707fd9261b-kube-api-access-5ckn9\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.987508 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/166ba125-3d7b-4ab8-bbca-7f707fd9261b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd"
Nov 24 09:01:18 crc kubenswrapper[4886]: I1124 09:01:18.988816 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5"
Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.019852 4886 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.032615 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-654f7c8958-25fwd"] Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.033556 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.058675 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-654f7c8958-25fwd"] Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.063447 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bxczf" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.088676 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-serving-cert\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.088725 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/166ba125-3d7b-4ab8-bbca-7f707fd9261b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.088747 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ckn9\" (UniqueName: \"kubernetes.io/projected/166ba125-3d7b-4ab8-bbca-7f707fd9261b-kube-api-access-5ckn9\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " 
pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.088764 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/166ba125-3d7b-4ab8-bbca-7f707fd9261b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.088803 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr4zt\" (UniqueName: \"kubernetes.io/projected/5b875f61-3159-45f2-a79b-a8a761dddc9f-kube-api-access-mr4zt\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.088822 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-trusted-ca-bundle\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.088985 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-service-ca\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.089143 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-oauth-serving-cert\") pod 
\"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.089184 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-config\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.089273 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-oauth-config\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.091140 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/166ba125-3d7b-4ab8-bbca-7f707fd9261b-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.092752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/166ba125-3d7b-4ab8-bbca-7f707fd9261b-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.107455 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ckn9\" (UniqueName: 
\"kubernetes.io/projected/166ba125-3d7b-4ab8-bbca-7f707fd9261b-kube-api-access-5ckn9\") pod \"nmstate-console-plugin-5874bd7bc5-nxtnd\" (UID: \"166ba125-3d7b-4ab8-bbca-7f707fd9261b\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.125112 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bxczf" event={"ID":"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e","Type":"ContainerStarted","Data":"db2d648bcd1224a7396dfd6ea8ab3e548720ea8ec893279de085b61162c4911c"} Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.174013 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.191429 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-serving-cert\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.191504 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr4zt\" (UniqueName: \"kubernetes.io/projected/5b875f61-3159-45f2-a79b-a8a761dddc9f-kube-api-access-mr4zt\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.191526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-trusted-ca-bundle\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc 
kubenswrapper[4886]: I1124 09:01:19.191564 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-service-ca\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.191603 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-oauth-serving-cert\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.191622 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-config\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.191640 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-oauth-config\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.194344 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-service-ca\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.196984 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-oauth-serving-cert\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.198298 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-trusted-ca-bundle\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.198725 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-config\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.199397 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-oauth-config\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.200019 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b875f61-3159-45f2-a79b-a8a761dddc9f-console-serving-cert\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.214884 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-mr4zt\" (UniqueName: \"kubernetes.io/projected/5b875f61-3159-45f2-a79b-a8a761dddc9f-kube-api-access-mr4zt\") pod \"console-654f7c8958-25fwd\" (UID: \"5b875f61-3159-45f2-a79b-a8a761dddc9f\") " pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.362629 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb"] Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.385812 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.448200 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd"] Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.498701 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5"] Nov 24 09:01:19 crc kubenswrapper[4886]: I1124 09:01:19.594982 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-654f7c8958-25fwd"] Nov 24 09:01:19 crc kubenswrapper[4886]: W1124 09:01:19.597692 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b875f61_3159_45f2_a79b_a8a761dddc9f.slice/crio-029007c39de3e04012f6a20e6bb0a88c7af2faeef05eca2e17da44b71b483ecb WatchSource:0}: Error finding container 029007c39de3e04012f6a20e6bb0a88c7af2faeef05eca2e17da44b71b483ecb: Status 404 returned error can't find the container with id 029007c39de3e04012f6a20e6bb0a88c7af2faeef05eca2e17da44b71b483ecb Nov 24 09:01:20 crc kubenswrapper[4886]: I1124 09:01:20.132733 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb" 
event={"ID":"33d55c5c-55cd-453e-8888-c064a7e0e36d","Type":"ContainerStarted","Data":"9a9eb049fcb6a9596bcd9a470d4101997dc17f097af66fc48c586952fceb67a9"} Nov 24 09:01:20 crc kubenswrapper[4886]: I1124 09:01:20.134284 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5" event={"ID":"028a41e3-6c82-4e95-a4e5-fc835e4d75af","Type":"ContainerStarted","Data":"078dc3b3ecc6915a7db2f6e37015766760a202f17345af4f711723ed13b32c6e"} Nov 24 09:01:20 crc kubenswrapper[4886]: I1124 09:01:20.135792 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-654f7c8958-25fwd" event={"ID":"5b875f61-3159-45f2-a79b-a8a761dddc9f","Type":"ContainerStarted","Data":"8aa8eac13dc2e5dfdd772aceda65077f314ac7547640dedd66021fa229b9a090"} Nov 24 09:01:20 crc kubenswrapper[4886]: I1124 09:01:20.135832 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-654f7c8958-25fwd" event={"ID":"5b875f61-3159-45f2-a79b-a8a761dddc9f","Type":"ContainerStarted","Data":"029007c39de3e04012f6a20e6bb0a88c7af2faeef05eca2e17da44b71b483ecb"} Nov 24 09:01:20 crc kubenswrapper[4886]: I1124 09:01:20.137970 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" event={"ID":"166ba125-3d7b-4ab8-bbca-7f707fd9261b","Type":"ContainerStarted","Data":"440c93719da1ee95de8c4e543b3c76781411c2d113b40a000d35e8fc3a424d2f"} Nov 24 09:01:20 crc kubenswrapper[4886]: I1124 09:01:20.155053 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-654f7c8958-25fwd" podStartSLOduration=1.15503636 podStartE2EDuration="1.15503636s" podCreationTimestamp="2025-11-24 09:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:01:20.15289474 +0000 UTC m=+736.039632895" watchObservedRunningTime="2025-11-24 09:01:20.15503636 +0000 
UTC m=+736.041774485" Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.163025 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5" event={"ID":"028a41e3-6c82-4e95-a4e5-fc835e4d75af","Type":"ContainerStarted","Data":"35a10174669939f08fe03010dbcf5a440f894ef8e9a1da715e8e9ac679f8ee59"} Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.166297 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" event={"ID":"166ba125-3d7b-4ab8-bbca-7f707fd9261b","Type":"ContainerStarted","Data":"ea5107f15ece91086dc3d39596d4df8c2497a4c7befd12227f8b578f893cc547"} Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.168393 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bxczf" event={"ID":"50df2428-7c0e-4f4a-9c13-dd5cb4038f2e","Type":"ContainerStarted","Data":"1b37e9fa7ebb4e6b2fe8943d70b2b776af9fdb9b1a6234ad916421689eda5023"} Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.168538 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bxczf" Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.170132 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb" event={"ID":"33d55c5c-55cd-453e-8888-c064a7e0e36d","Type":"ContainerStarted","Data":"e1889cc32824f09327640d7ddc25dd001430d1270805d00ab3cc5a42239346bd"} Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.170431 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb" Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.208378 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-nxtnd" podStartSLOduration=2.150754648 podStartE2EDuration="5.208353776s" podCreationTimestamp="2025-11-24 
09:01:18 +0000 UTC" firstStartedPulling="2025-11-24 09:01:19.458043653 +0000 UTC m=+735.344781798" lastFinishedPulling="2025-11-24 09:01:22.515642791 +0000 UTC m=+738.402380926" observedRunningTime="2025-11-24 09:01:23.184290828 +0000 UTC m=+739.071028963" watchObservedRunningTime="2025-11-24 09:01:23.208353776 +0000 UTC m=+739.095091911" Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.227024 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb" podStartSLOduration=2.090008865 podStartE2EDuration="5.226976301s" podCreationTimestamp="2025-11-24 09:01:18 +0000 UTC" firstStartedPulling="2025-11-24 09:01:19.380320602 +0000 UTC m=+735.267058737" lastFinishedPulling="2025-11-24 09:01:22.517288038 +0000 UTC m=+738.404026173" observedRunningTime="2025-11-24 09:01:23.205056623 +0000 UTC m=+739.091794768" watchObservedRunningTime="2025-11-24 09:01:23.226976301 +0000 UTC m=+739.113714466" Nov 24 09:01:23 crc kubenswrapper[4886]: I1124 09:01:23.230627 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bxczf" podStartSLOduration=1.813919895 podStartE2EDuration="5.230615614s" podCreationTimestamp="2025-11-24 09:01:18 +0000 UTC" firstStartedPulling="2025-11-24 09:01:19.099911919 +0000 UTC m=+734.986650054" lastFinishedPulling="2025-11-24 09:01:22.516607638 +0000 UTC m=+738.403345773" observedRunningTime="2025-11-24 09:01:23.220991873 +0000 UTC m=+739.107730028" watchObservedRunningTime="2025-11-24 09:01:23.230615614 +0000 UTC m=+739.117353749" Nov 24 09:01:26 crc kubenswrapper[4886]: I1124 09:01:26.190624 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5" event={"ID":"028a41e3-6c82-4e95-a4e5-fc835e4d75af","Type":"ContainerStarted","Data":"7ffdc10823ac60966ff442e0dd142c0c65df1bc8b8a5aa09d9eaec92467b48fb"} Nov 24 09:01:26 crc kubenswrapper[4886]: I1124 09:01:26.208404 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-646k5" podStartSLOduration=2.368289891 podStartE2EDuration="8.208369881s" podCreationTimestamp="2025-11-24 09:01:18 +0000 UTC" firstStartedPulling="2025-11-24 09:01:19.504192594 +0000 UTC m=+735.390930739" lastFinishedPulling="2025-11-24 09:01:25.344272594 +0000 UTC m=+741.231010729" observedRunningTime="2025-11-24 09:01:26.207594839 +0000 UTC m=+742.094332984" watchObservedRunningTime="2025-11-24 09:01:26.208369881 +0000 UTC m=+742.095108016" Nov 24 09:01:29 crc kubenswrapper[4886]: I1124 09:01:29.088975 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bxczf" Nov 24 09:01:29 crc kubenswrapper[4886]: I1124 09:01:29.387069 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:29 crc kubenswrapper[4886]: I1124 09:01:29.387275 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:29 crc kubenswrapper[4886]: I1124 09:01:29.394978 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:30 crc kubenswrapper[4886]: I1124 09:01:30.220361 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-654f7c8958-25fwd" Nov 24 09:01:30 crc kubenswrapper[4886]: I1124 09:01:30.277630 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tbxjm"] Nov 24 09:01:39 crc kubenswrapper[4886]: I1124 09:01:39.025700 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-dvjwb" Nov 24 09:01:50 crc kubenswrapper[4886]: I1124 09:01:50.256182 4886 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.267726 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q"] Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.270021 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.274766 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.280882 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q"] Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.444029 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.444106 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxws\" (UniqueName: \"kubernetes.io/projected/e28dd825-a491-4ac3-a3b1-0e19192a40b9-kube-api-access-fpxws\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.444138 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.546039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.546584 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpxws\" (UniqueName: \"kubernetes.io/projected/e28dd825-a491-4ac3-a3b1-0e19192a40b9-kube-api-access-fpxws\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.546735 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.546618 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-bundle\") 
pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.547353 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.574711 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpxws\" (UniqueName: \"kubernetes.io/projected/e28dd825-a491-4ac3-a3b1-0e19192a40b9-kube-api-access-fpxws\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.588394 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:54 crc kubenswrapper[4886]: I1124 09:01:54.816226 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q"] Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.319351 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-tbxjm" podUID="87f902e1-073b-4ccd-8b3a-717f802e9671" containerName="console" containerID="cri-o://8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9" gracePeriod=15 Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.387338 4886 generic.go:334] "Generic (PLEG): container finished" podID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerID="4584696fd09a5017ba4482f483702340644c3aebfe1eb48410c0cc36f5271b01" exitCode=0 Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.387406 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" event={"ID":"e28dd825-a491-4ac3-a3b1-0e19192a40b9","Type":"ContainerDied","Data":"4584696fd09a5017ba4482f483702340644c3aebfe1eb48410c0cc36f5271b01"} Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.387472 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" event={"ID":"e28dd825-a491-4ac3-a3b1-0e19192a40b9","Type":"ContainerStarted","Data":"941c57f4f0be6afac9bf8472e0e60b98594ebd6240045780d52ab2213561347f"} Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.717807 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tbxjm_87f902e1-073b-4ccd-8b3a-717f802e9671/console/0.log" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.718272 4886 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.866571 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-trusted-ca-bundle\") pod \"87f902e1-073b-4ccd-8b3a-717f802e9671\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.866649 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-console-config\") pod \"87f902e1-073b-4ccd-8b3a-717f802e9671\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.866678 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-service-ca\") pod \"87f902e1-073b-4ccd-8b3a-717f802e9671\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.866754 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-serving-cert\") pod \"87f902e1-073b-4ccd-8b3a-717f802e9671\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.866801 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-oauth-config\") pod \"87f902e1-073b-4ccd-8b3a-717f802e9671\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.866843 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-oauth-serving-cert\") pod \"87f902e1-073b-4ccd-8b3a-717f802e9671\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.866898 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjdn4\" (UniqueName: \"kubernetes.io/projected/87f902e1-073b-4ccd-8b3a-717f802e9671-kube-api-access-cjdn4\") pod \"87f902e1-073b-4ccd-8b3a-717f802e9671\" (UID: \"87f902e1-073b-4ccd-8b3a-717f802e9671\") " Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.868026 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-console-config" (OuterVolumeSpecName: "console-config") pod "87f902e1-073b-4ccd-8b3a-717f802e9671" (UID: "87f902e1-073b-4ccd-8b3a-717f802e9671"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.868085 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "87f902e1-073b-4ccd-8b3a-717f802e9671" (UID: "87f902e1-073b-4ccd-8b3a-717f802e9671"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.868078 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-service-ca" (OuterVolumeSpecName: "service-ca") pod "87f902e1-073b-4ccd-8b3a-717f802e9671" (UID: "87f902e1-073b-4ccd-8b3a-717f802e9671"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.868528 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "87f902e1-073b-4ccd-8b3a-717f802e9671" (UID: "87f902e1-073b-4ccd-8b3a-717f802e9671"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.874512 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "87f902e1-073b-4ccd-8b3a-717f802e9671" (UID: "87f902e1-073b-4ccd-8b3a-717f802e9671"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.875378 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f902e1-073b-4ccd-8b3a-717f802e9671-kube-api-access-cjdn4" (OuterVolumeSpecName: "kube-api-access-cjdn4") pod "87f902e1-073b-4ccd-8b3a-717f802e9671" (UID: "87f902e1-073b-4ccd-8b3a-717f802e9671"). InnerVolumeSpecName "kube-api-access-cjdn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.881531 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "87f902e1-073b-4ccd-8b3a-717f802e9671" (UID: "87f902e1-073b-4ccd-8b3a-717f802e9671"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.969804 4886 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.969841 4886 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87f902e1-073b-4ccd-8b3a-717f802e9671-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.969851 4886 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.969863 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjdn4\" (UniqueName: \"kubernetes.io/projected/87f902e1-073b-4ccd-8b3a-717f802e9671-kube-api-access-cjdn4\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.969874 4886 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.969884 4886 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:55 crc kubenswrapper[4886]: I1124 09:01:55.969892 4886 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87f902e1-073b-4ccd-8b3a-717f802e9671-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:56 crc 
kubenswrapper[4886]: I1124 09:01:56.401226 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tbxjm_87f902e1-073b-4ccd-8b3a-717f802e9671/console/0.log" Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.401287 4886 generic.go:334] "Generic (PLEG): container finished" podID="87f902e1-073b-4ccd-8b3a-717f802e9671" containerID="8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9" exitCode=2 Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.401328 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tbxjm" event={"ID":"87f902e1-073b-4ccd-8b3a-717f802e9671","Type":"ContainerDied","Data":"8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9"} Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.401364 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tbxjm" event={"ID":"87f902e1-073b-4ccd-8b3a-717f802e9671","Type":"ContainerDied","Data":"545e8bd5ff09b34f5aae73ec10886c2efd2047f54b1945232ccc5886e0f42486"} Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.401382 4886 scope.go:117] "RemoveContainer" containerID="8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9" Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.401542 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tbxjm" Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.427184 4886 scope.go:117] "RemoveContainer" containerID="8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9" Nov 24 09:01:56 crc kubenswrapper[4886]: E1124 09:01:56.429105 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9\": container with ID starting with 8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9 not found: ID does not exist" containerID="8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9" Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.429196 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9"} err="failed to get container status \"8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9\": rpc error: code = NotFound desc = could not find container \"8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9\": container with ID starting with 8eea4918e58f77f6deb36f7042881dc2205041311fa6910fec0b42ced659d9f9 not found: ID does not exist" Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.435532 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tbxjm"] Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.438608 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tbxjm"] Nov 24 09:01:56 crc kubenswrapper[4886]: I1124 09:01:56.861205 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f902e1-073b-4ccd-8b3a-717f802e9671" path="/var/lib/kubelet/pods/87f902e1-073b-4ccd-8b3a-717f802e9671/volumes" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.409630 4886 generic.go:334] "Generic (PLEG): 
container finished" podID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerID="01a425442c921b11f29a7d6a70f8c4fa1a54f15f8361e740f7d375e2820bcda9" exitCode=0 Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.409725 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" event={"ID":"e28dd825-a491-4ac3-a3b1-0e19192a40b9","Type":"ContainerDied","Data":"01a425442c921b11f29a7d6a70f8c4fa1a54f15f8361e740f7d375e2820bcda9"} Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.616377 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8s2td"] Nov 24 09:01:57 crc kubenswrapper[4886]: E1124 09:01:57.616748 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f902e1-073b-4ccd-8b3a-717f802e9671" containerName="console" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.616773 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f902e1-073b-4ccd-8b3a-717f802e9671" containerName="console" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.616926 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f902e1-073b-4ccd-8b3a-717f802e9671" containerName="console" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.618480 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.628794 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s2td"] Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.696507 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-utilities\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.696557 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-catalog-content\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.696594 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5gd\" (UniqueName: \"kubernetes.io/projected/d1aed817-5aee-40e1-b46d-208a8a46c3f7-kube-api-access-4k5gd\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.799034 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-catalog-content\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.799117 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4k5gd\" (UniqueName: \"kubernetes.io/projected/d1aed817-5aee-40e1-b46d-208a8a46c3f7-kube-api-access-4k5gd\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.799229 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-utilities\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.799774 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-utilities\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.799989 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-catalog-content\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.831645 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5gd\" (UniqueName: \"kubernetes.io/projected/d1aed817-5aee-40e1-b46d-208a8a46c3f7-kube-api-access-4k5gd\") pod \"redhat-operators-8s2td\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:57 crc kubenswrapper[4886]: I1124 09:01:57.936029 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:01:58 crc kubenswrapper[4886]: I1124 09:01:58.204016 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s2td"] Nov 24 09:01:58 crc kubenswrapper[4886]: I1124 09:01:58.420284 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2td" event={"ID":"d1aed817-5aee-40e1-b46d-208a8a46c3f7","Type":"ContainerStarted","Data":"86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70"} Nov 24 09:01:58 crc kubenswrapper[4886]: I1124 09:01:58.420784 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2td" event={"ID":"d1aed817-5aee-40e1-b46d-208a8a46c3f7","Type":"ContainerStarted","Data":"897c8185fcbc6572d2dfbb8e3ededaaa4d3def395d0eb63a05ae8ac287e63894"} Nov 24 09:01:58 crc kubenswrapper[4886]: I1124 09:01:58.427410 4886 generic.go:334] "Generic (PLEG): container finished" podID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerID="e78e4cc0d369039c188235288c0d210438b469f437ba9d95969aa696b3f05ada" exitCode=0 Nov 24 09:01:58 crc kubenswrapper[4886]: I1124 09:01:58.427455 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" event={"ID":"e28dd825-a491-4ac3-a3b1-0e19192a40b9","Type":"ContainerDied","Data":"e78e4cc0d369039c188235288c0d210438b469f437ba9d95969aa696b3f05ada"} Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.434940 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerID="86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70" exitCode=0 Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.435026 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2td" 
event={"ID":"d1aed817-5aee-40e1-b46d-208a8a46c3f7","Type":"ContainerDied","Data":"86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70"} Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.715843 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.828466 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-util\") pod \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.828705 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-bundle\") pod \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.828746 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpxws\" (UniqueName: \"kubernetes.io/projected/e28dd825-a491-4ac3-a3b1-0e19192a40b9-kube-api-access-fpxws\") pod \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\" (UID: \"e28dd825-a491-4ac3-a3b1-0e19192a40b9\") " Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.829664 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-bundle" (OuterVolumeSpecName: "bundle") pod "e28dd825-a491-4ac3-a3b1-0e19192a40b9" (UID: "e28dd825-a491-4ac3-a3b1-0e19192a40b9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.833514 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28dd825-a491-4ac3-a3b1-0e19192a40b9-kube-api-access-fpxws" (OuterVolumeSpecName: "kube-api-access-fpxws") pod "e28dd825-a491-4ac3-a3b1-0e19192a40b9" (UID: "e28dd825-a491-4ac3-a3b1-0e19192a40b9"). InnerVolumeSpecName "kube-api-access-fpxws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.842008 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-util" (OuterVolumeSpecName: "util") pod "e28dd825-a491-4ac3-a3b1-0e19192a40b9" (UID: "e28dd825-a491-4ac3-a3b1-0e19192a40b9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.930961 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.931011 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpxws\" (UniqueName: \"kubernetes.io/projected/e28dd825-a491-4ac3-a3b1-0e19192a40b9-kube-api-access-fpxws\") on node \"crc\" DevicePath \"\"" Nov 24 09:01:59 crc kubenswrapper[4886]: I1124 09:01:59.931029 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e28dd825-a491-4ac3-a3b1-0e19192a40b9-util\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:00 crc kubenswrapper[4886]: I1124 09:02:00.442667 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" 
event={"ID":"e28dd825-a491-4ac3-a3b1-0e19192a40b9","Type":"ContainerDied","Data":"941c57f4f0be6afac9bf8472e0e60b98594ebd6240045780d52ab2213561347f"} Nov 24 09:02:00 crc kubenswrapper[4886]: I1124 09:02:00.443249 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="941c57f4f0be6afac9bf8472e0e60b98594ebd6240045780d52ab2213561347f" Nov 24 09:02:00 crc kubenswrapper[4886]: I1124 09:02:00.443380 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q" Nov 24 09:02:01 crc kubenswrapper[4886]: I1124 09:02:01.786187 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:02:01 crc kubenswrapper[4886]: I1124 09:02:01.786752 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:02:02 crc kubenswrapper[4886]: I1124 09:02:02.711353 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerID="a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed" exitCode=0 Nov 24 09:02:02 crc kubenswrapper[4886]: I1124 09:02:02.711420 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2td" event={"ID":"d1aed817-5aee-40e1-b46d-208a8a46c3f7","Type":"ContainerDied","Data":"a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed"} Nov 24 09:02:03 crc kubenswrapper[4886]: I1124 09:02:03.720014 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2td" event={"ID":"d1aed817-5aee-40e1-b46d-208a8a46c3f7","Type":"ContainerStarted","Data":"415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8"} Nov 24 09:02:03 crc kubenswrapper[4886]: I1124 09:02:03.748330 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8s2td" podStartSLOduration=3.027886628 podStartE2EDuration="6.748296908s" podCreationTimestamp="2025-11-24 09:01:57 +0000 UTC" firstStartedPulling="2025-11-24 09:01:59.437239399 +0000 UTC m=+775.323977534" lastFinishedPulling="2025-11-24 09:02:03.157649669 +0000 UTC m=+779.044387814" observedRunningTime="2025-11-24 09:02:03.73734844 +0000 UTC m=+779.624086575" watchObservedRunningTime="2025-11-24 09:02:03.748296908 +0000 UTC m=+779.635035063" Nov 24 09:02:07 crc kubenswrapper[4886]: I1124 09:02:07.936382 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:02:07 crc kubenswrapper[4886]: I1124 09:02:07.937072 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:02:08 crc kubenswrapper[4886]: I1124 09:02:08.982996 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8s2td" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="registry-server" probeResult="failure" output=< Nov 24 09:02:08 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:02:08 crc kubenswrapper[4886]: > Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.828409 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s"] Nov 24 09:02:10 crc kubenswrapper[4886]: E1124 09:02:10.828760 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerName="util" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.828780 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerName="util" Nov 24 09:02:10 crc kubenswrapper[4886]: E1124 09:02:10.828793 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerName="extract" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.828802 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerName="extract" Nov 24 09:02:10 crc kubenswrapper[4886]: E1124 09:02:10.828815 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerName="pull" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.828823 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerName="pull" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.828967 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28dd825-a491-4ac3-a3b1-0e19192a40b9" containerName="extract" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.829650 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.832916 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.832945 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.833043 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.833097 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.834062 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hrm5q" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.882504 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s"] Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.924398 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24f2f5da-80b6-49b8-abe7-43f1301c84db-apiservice-cert\") pod \"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.924528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr9jm\" (UniqueName: \"kubernetes.io/projected/24f2f5da-80b6-49b8-abe7-43f1301c84db-kube-api-access-kr9jm\") pod 
\"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:10 crc kubenswrapper[4886]: I1124 09:02:10.924598 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24f2f5da-80b6-49b8-abe7-43f1301c84db-webhook-cert\") pod \"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.026816 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24f2f5da-80b6-49b8-abe7-43f1301c84db-apiservice-cert\") pod \"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.026874 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr9jm\" (UniqueName: \"kubernetes.io/projected/24f2f5da-80b6-49b8-abe7-43f1301c84db-kube-api-access-kr9jm\") pod \"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.026899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24f2f5da-80b6-49b8-abe7-43f1301c84db-webhook-cert\") pod \"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:11 crc 
kubenswrapper[4886]: I1124 09:02:11.043047 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24f2f5da-80b6-49b8-abe7-43f1301c84db-webhook-cert\") pod \"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.056133 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr9jm\" (UniqueName: \"kubernetes.io/projected/24f2f5da-80b6-49b8-abe7-43f1301c84db-kube-api-access-kr9jm\") pod \"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.063088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24f2f5da-80b6-49b8-abe7-43f1301c84db-apiservice-cert\") pod \"metallb-operator-controller-manager-688456bb67-dhj9s\" (UID: \"24f2f5da-80b6-49b8-abe7-43f1301c84db\") " pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.083773 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm"] Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.084807 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.088287 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.088503 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n629c" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.088561 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.105112 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm"] Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.158635 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.230032 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58ed8691-0e33-4c91-aecb-d8bfcceab2de-apiservice-cert\") pod \"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.230110 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrg8k\" (UniqueName: \"kubernetes.io/projected/58ed8691-0e33-4c91-aecb-d8bfcceab2de-kube-api-access-lrg8k\") pod \"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: 
I1124 09:02:11.230164 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58ed8691-0e33-4c91-aecb-d8bfcceab2de-webhook-cert\") pod \"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.331801 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58ed8691-0e33-4c91-aecb-d8bfcceab2de-apiservice-cert\") pod \"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.332356 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrg8k\" (UniqueName: \"kubernetes.io/projected/58ed8691-0e33-4c91-aecb-d8bfcceab2de-kube-api-access-lrg8k\") pod \"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.332415 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58ed8691-0e33-4c91-aecb-d8bfcceab2de-webhook-cert\") pod \"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.349595 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58ed8691-0e33-4c91-aecb-d8bfcceab2de-apiservice-cert\") pod 
\"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.355985 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58ed8691-0e33-4c91-aecb-d8bfcceab2de-webhook-cert\") pod \"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.392153 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrg8k\" (UniqueName: \"kubernetes.io/projected/58ed8691-0e33-4c91-aecb-d8bfcceab2de-kube-api-access-lrg8k\") pod \"metallb-operator-webhook-server-7fd5b69667-tg7zm\" (UID: \"58ed8691-0e33-4c91-aecb-d8bfcceab2de\") " pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.418666 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.575074 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s"] Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.769556 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm"] Nov 24 09:02:11 crc kubenswrapper[4886]: I1124 09:02:11.773622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" event={"ID":"24f2f5da-80b6-49b8-abe7-43f1301c84db","Type":"ContainerStarted","Data":"2550efd729c2ef91ef63546faa57c7d6af113f8b97032e10bed2cadfd05207aa"} Nov 24 09:02:11 crc kubenswrapper[4886]: W1124 09:02:11.798211 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58ed8691_0e33_4c91_aecb_d8bfcceab2de.slice/crio-94bee58bf62356362256ffd2ed8fe0dd18df668082b4364fdaef7ae43ee0f9c2 WatchSource:0}: Error finding container 94bee58bf62356362256ffd2ed8fe0dd18df668082b4364fdaef7ae43ee0f9c2: Status 404 returned error can't find the container with id 94bee58bf62356362256ffd2ed8fe0dd18df668082b4364fdaef7ae43ee0f9c2 Nov 24 09:02:12 crc kubenswrapper[4886]: I1124 09:02:12.783074 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" event={"ID":"58ed8691-0e33-4c91-aecb-d8bfcceab2de","Type":"ContainerStarted","Data":"94bee58bf62356362256ffd2ed8fe0dd18df668082b4364fdaef7ae43ee0f9c2"} Nov 24 09:02:17 crc kubenswrapper[4886]: I1124 09:02:17.991308 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:02:18 crc kubenswrapper[4886]: I1124 09:02:18.036665 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:02:18 crc kubenswrapper[4886]: I1124 09:02:18.230467 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s2td"] Nov 24 09:02:19 crc kubenswrapper[4886]: I1124 09:02:19.851388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" event={"ID":"58ed8691-0e33-4c91-aecb-d8bfcceab2de","Type":"ContainerStarted","Data":"8bf5e365c334d4557d12aa40b935a38cf2c713d7696f7dd70b82567c3afb844a"} Nov 24 09:02:19 crc kubenswrapper[4886]: I1124 09:02:19.851887 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:19 crc kubenswrapper[4886]: I1124 09:02:19.853374 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" event={"ID":"24f2f5da-80b6-49b8-abe7-43f1301c84db","Type":"ContainerStarted","Data":"a68bdeb13a06f3a6e1cfedc017313217fc282380ce549cc8532b1b9c64275114"} Nov 24 09:02:19 crc kubenswrapper[4886]: I1124 09:02:19.853583 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8s2td" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="registry-server" containerID="cri-o://415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8" gracePeriod=2 Nov 24 09:02:19 crc kubenswrapper[4886]: I1124 09:02:19.870749 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" podStartSLOduration=1.004917764 podStartE2EDuration="8.870723705s" podCreationTimestamp="2025-11-24 09:02:11 +0000 UTC" firstStartedPulling="2025-11-24 09:02:11.806885482 +0000 UTC m=+787.693623617" lastFinishedPulling="2025-11-24 09:02:19.672691423 +0000 UTC m=+795.559429558" 
observedRunningTime="2025-11-24 09:02:19.869967564 +0000 UTC m=+795.756705699" watchObservedRunningTime="2025-11-24 09:02:19.870723705 +0000 UTC m=+795.757461840" Nov 24 09:02:19 crc kubenswrapper[4886]: I1124 09:02:19.911608 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" podStartSLOduration=1.8454480389999999 podStartE2EDuration="9.911586277s" podCreationTimestamp="2025-11-24 09:02:10 +0000 UTC" firstStartedPulling="2025-11-24 09:02:11.584457552 +0000 UTC m=+787.471195687" lastFinishedPulling="2025-11-24 09:02:19.65059579 +0000 UTC m=+795.537333925" observedRunningTime="2025-11-24 09:02:19.902659755 +0000 UTC m=+795.789397890" watchObservedRunningTime="2025-11-24 09:02:19.911586277 +0000 UTC m=+795.798324422" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.212266 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.308891 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-catalog-content\") pod \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.309351 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k5gd\" (UniqueName: \"kubernetes.io/projected/d1aed817-5aee-40e1-b46d-208a8a46c3f7-kube-api-access-4k5gd\") pod \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.309487 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-utilities\") pod 
\"d1aed817-5aee-40e1-b46d-208a8a46c3f7\" (UID: \"d1aed817-5aee-40e1-b46d-208a8a46c3f7\") " Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.310325 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-utilities" (OuterVolumeSpecName: "utilities") pod "d1aed817-5aee-40e1-b46d-208a8a46c3f7" (UID: "d1aed817-5aee-40e1-b46d-208a8a46c3f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.319230 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1aed817-5aee-40e1-b46d-208a8a46c3f7-kube-api-access-4k5gd" (OuterVolumeSpecName: "kube-api-access-4k5gd") pod "d1aed817-5aee-40e1-b46d-208a8a46c3f7" (UID: "d1aed817-5aee-40e1-b46d-208a8a46c3f7"). InnerVolumeSpecName "kube-api-access-4k5gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.394885 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1aed817-5aee-40e1-b46d-208a8a46c3f7" (UID: "d1aed817-5aee-40e1-b46d-208a8a46c3f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.411274 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.411316 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1aed817-5aee-40e1-b46d-208a8a46c3f7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.411334 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k5gd\" (UniqueName: \"kubernetes.io/projected/d1aed817-5aee-40e1-b46d-208a8a46c3f7-kube-api-access-4k5gd\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.436483 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7c5s"] Nov 24 09:02:20 crc kubenswrapper[4886]: E1124 09:02:20.436798 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="extract-utilities" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.436825 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="extract-utilities" Nov 24 09:02:20 crc kubenswrapper[4886]: E1124 09:02:20.436842 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="extract-content" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.436852 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="extract-content" Nov 24 09:02:20 crc kubenswrapper[4886]: E1124 09:02:20.436869 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="registry-server" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.436877 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="registry-server" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.437026 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerName="registry-server" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.438035 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.452365 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7c5s"] Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.513195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-utilities\") pod \"community-operators-q7c5s\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.513275 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dzcf\" (UniqueName: \"kubernetes.io/projected/008ebdbd-3199-4af5-9a32-2686d760c3ec-kube-api-access-9dzcf\") pod \"community-operators-q7c5s\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.513422 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-catalog-content\") pod \"community-operators-q7c5s\" (UID: 
\"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.615332 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-utilities\") pod \"community-operators-q7c5s\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.615386 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dzcf\" (UniqueName: \"kubernetes.io/projected/008ebdbd-3199-4af5-9a32-2686d760c3ec-kube-api-access-9dzcf\") pod \"community-operators-q7c5s\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.615439 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-catalog-content\") pod \"community-operators-q7c5s\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.615981 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-catalog-content\") pod \"community-operators-q7c5s\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.616557 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-utilities\") pod \"community-operators-q7c5s\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") 
" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.642594 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dzcf\" (UniqueName: \"kubernetes.io/projected/008ebdbd-3199-4af5-9a32-2686d760c3ec-kube-api-access-9dzcf\") pod \"community-operators-q7c5s\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.755798 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.937046 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" containerID="415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8" exitCode=0 Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.937206 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s2td" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.937277 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2td" event={"ID":"d1aed817-5aee-40e1-b46d-208a8a46c3f7","Type":"ContainerDied","Data":"415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8"} Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.937312 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2td" event={"ID":"d1aed817-5aee-40e1-b46d-208a8a46c3f7","Type":"ContainerDied","Data":"897c8185fcbc6572d2dfbb8e3ededaaa4d3def395d0eb63a05ae8ac287e63894"} Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.937333 4886 scope.go:117] "RemoveContainer" containerID="415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.939071 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.979042 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s2td"] Nov 24 09:02:20 crc kubenswrapper[4886]: I1124 09:02:20.984416 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8s2td"] Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.005410 4886 scope.go:117] "RemoveContainer" containerID="a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed" Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.066431 4886 scope.go:117] "RemoveContainer" containerID="86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70" Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.082640 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7c5s"] Nov 24 09:02:21 crc 
kubenswrapper[4886]: I1124 09:02:21.120180 4886 scope.go:117] "RemoveContainer" containerID="415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8" Nov 24 09:02:21 crc kubenswrapper[4886]: E1124 09:02:21.124142 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8\": container with ID starting with 415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8 not found: ID does not exist" containerID="415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8" Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.124224 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8"} err="failed to get container status \"415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8\": rpc error: code = NotFound desc = could not find container \"415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8\": container with ID starting with 415058bfa25d3d4ea23eb62ddb411469df4d48a74bfa64f2c56ec2e2f39621f8 not found: ID does not exist" Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.124266 4886 scope.go:117] "RemoveContainer" containerID="a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed" Nov 24 09:02:21 crc kubenswrapper[4886]: E1124 09:02:21.124568 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed\": container with ID starting with a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed not found: ID does not exist" containerID="a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed" Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.124586 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed"} err="failed to get container status \"a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed\": rpc error: code = NotFound desc = could not find container \"a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed\": container with ID starting with a8d85b65ebaf5e356e74c8afa639a833a4e31d7ce16db2c48c8664871bd06eed not found: ID does not exist" Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.124603 4886 scope.go:117] "RemoveContainer" containerID="86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70" Nov 24 09:02:21 crc kubenswrapper[4886]: E1124 09:02:21.124815 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70\": container with ID starting with 86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70 not found: ID does not exist" containerID="86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70" Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.124835 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70"} err="failed to get container status \"86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70\": rpc error: code = NotFound desc = could not find container \"86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70\": container with ID starting with 86fd1b225f49c950f323e87baf558d282938749ffdedc4f714d4babd7d151a70 not found: ID does not exist" Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.947667 4886 generic.go:334] "Generic (PLEG): container finished" podID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerID="5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d" exitCode=0 Nov 24 09:02:21 crc kubenswrapper[4886]: 
I1124 09:02:21.949694 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7c5s" event={"ID":"008ebdbd-3199-4af5-9a32-2686d760c3ec","Type":"ContainerDied","Data":"5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d"} Nov 24 09:02:21 crc kubenswrapper[4886]: I1124 09:02:21.949756 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7c5s" event={"ID":"008ebdbd-3199-4af5-9a32-2686d760c3ec","Type":"ContainerStarted","Data":"95bcea642fb5b414706813bd91a75b6c4dfa440d244fa65116f927924f43402e"} Nov 24 09:02:22 crc kubenswrapper[4886]: I1124 09:02:22.857421 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1aed817-5aee-40e1-b46d-208a8a46c3f7" path="/var/lib/kubelet/pods/d1aed817-5aee-40e1-b46d-208a8a46c3f7/volumes" Nov 24 09:02:22 crc kubenswrapper[4886]: I1124 09:02:22.956311 4886 generic.go:334] "Generic (PLEG): container finished" podID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerID="23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc" exitCode=0 Nov 24 09:02:22 crc kubenswrapper[4886]: I1124 09:02:22.956383 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7c5s" event={"ID":"008ebdbd-3199-4af5-9a32-2686d760c3ec","Type":"ContainerDied","Data":"23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc"} Nov 24 09:02:24 crc kubenswrapper[4886]: I1124 09:02:24.972009 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7c5s" event={"ID":"008ebdbd-3199-4af5-9a32-2686d760c3ec","Type":"ContainerStarted","Data":"103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7"} Nov 24 09:02:24 crc kubenswrapper[4886]: I1124 09:02:24.997487 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7c5s" podStartSLOduration=3.572866199 
podStartE2EDuration="4.997465915s" podCreationTimestamp="2025-11-24 09:02:20 +0000 UTC" firstStartedPulling="2025-11-24 09:02:21.951174658 +0000 UTC m=+797.837912793" lastFinishedPulling="2025-11-24 09:02:23.375774384 +0000 UTC m=+799.262512509" observedRunningTime="2025-11-24 09:02:24.995576442 +0000 UTC m=+800.882314587" watchObservedRunningTime="2025-11-24 09:02:24.997465915 +0000 UTC m=+800.884204050" Nov 24 09:02:29 crc kubenswrapper[4886]: I1124 09:02:29.837389 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xfxl"] Nov 24 09:02:29 crc kubenswrapper[4886]: I1124 09:02:29.839758 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:29 crc kubenswrapper[4886]: I1124 09:02:29.856945 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xfxl"] Nov 24 09:02:29 crc kubenswrapper[4886]: I1124 09:02:29.956764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-utilities\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:29 crc kubenswrapper[4886]: I1124 09:02:29.956844 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdncx\" (UniqueName: \"kubernetes.io/projected/9889e6b5-b0e9-4429-8d13-194c7f22b10e-kube-api-access-tdncx\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:29 crc kubenswrapper[4886]: I1124 09:02:29.956958 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-catalog-content\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.058204 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-catalog-content\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.058289 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-utilities\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.058311 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdncx\" (UniqueName: \"kubernetes.io/projected/9889e6b5-b0e9-4429-8d13-194c7f22b10e-kube-api-access-tdncx\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.059208 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-catalog-content\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.059323 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-utilities\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.086209 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdncx\" (UniqueName: \"kubernetes.io/projected/9889e6b5-b0e9-4429-8d13-194c7f22b10e-kube-api-access-tdncx\") pod \"redhat-marketplace-7xfxl\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.198100 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.466327 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xfxl"] Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.756641 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.757212 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:30 crc kubenswrapper[4886]: I1124 09:02:30.804426 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:31 crc kubenswrapper[4886]: I1124 09:02:31.011508 4886 generic.go:334] "Generic (PLEG): container finished" podID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerID="ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806" exitCode=0 Nov 24 09:02:31 crc kubenswrapper[4886]: I1124 09:02:31.011596 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xfxl" 
event={"ID":"9889e6b5-b0e9-4429-8d13-194c7f22b10e","Type":"ContainerDied","Data":"ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806"} Nov 24 09:02:31 crc kubenswrapper[4886]: I1124 09:02:31.012024 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xfxl" event={"ID":"9889e6b5-b0e9-4429-8d13-194c7f22b10e","Type":"ContainerStarted","Data":"21fd76a99f235f2affa29a8b92017a439b7a1e3ead78576404f8f58b3d125d05"} Nov 24 09:02:31 crc kubenswrapper[4886]: I1124 09:02:31.070410 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:31 crc kubenswrapper[4886]: I1124 09:02:31.428802 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7fd5b69667-tg7zm" Nov 24 09:02:31 crc kubenswrapper[4886]: I1124 09:02:31.784869 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:02:31 crc kubenswrapper[4886]: I1124 09:02:31.784937 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:02:32 crc kubenswrapper[4886]: I1124 09:02:32.019196 4886 generic.go:334] "Generic (PLEG): container finished" podID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerID="5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369" exitCode=0 Nov 24 09:02:32 crc kubenswrapper[4886]: I1124 09:02:32.019294 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7xfxl" event={"ID":"9889e6b5-b0e9-4429-8d13-194c7f22b10e","Type":"ContainerDied","Data":"5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369"} Nov 24 09:02:33 crc kubenswrapper[4886]: I1124 09:02:33.027346 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xfxl" event={"ID":"9889e6b5-b0e9-4429-8d13-194c7f22b10e","Type":"ContainerStarted","Data":"f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374"} Nov 24 09:02:33 crc kubenswrapper[4886]: I1124 09:02:33.044473 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xfxl" podStartSLOduration=2.6338356320000003 podStartE2EDuration="4.044450658s" podCreationTimestamp="2025-11-24 09:02:29 +0000 UTC" firstStartedPulling="2025-11-24 09:02:31.015268968 +0000 UTC m=+806.902007103" lastFinishedPulling="2025-11-24 09:02:32.425883994 +0000 UTC m=+808.312622129" observedRunningTime="2025-11-24 09:02:33.042174132 +0000 UTC m=+808.928912277" watchObservedRunningTime="2025-11-24 09:02:33.044450658 +0000 UTC m=+808.931188793" Nov 24 09:02:34 crc kubenswrapper[4886]: I1124 09:02:34.431059 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7c5s"] Nov 24 09:02:34 crc kubenswrapper[4886]: I1124 09:02:34.431413 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7c5s" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerName="registry-server" containerID="cri-o://103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7" gracePeriod=2 Nov 24 09:02:34 crc kubenswrapper[4886]: E1124 09:02:34.506201 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008ebdbd_3199_4af5_9a32_2686d760c3ec.slice/crio-103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.037253 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.043287 4886 generic.go:334] "Generic (PLEG): container finished" podID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerID="103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7" exitCode=0 Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.043372 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7c5s" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.043353 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7c5s" event={"ID":"008ebdbd-3199-4af5-9a32-2686d760c3ec","Type":"ContainerDied","Data":"103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7"} Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.043530 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7c5s" event={"ID":"008ebdbd-3199-4af5-9a32-2686d760c3ec","Type":"ContainerDied","Data":"95bcea642fb5b414706813bd91a75b6c4dfa440d244fa65116f927924f43402e"} Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.043564 4886 scope.go:117] "RemoveContainer" containerID="103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.070615 4886 scope.go:117] "RemoveContainer" containerID="23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.094554 4886 scope.go:117] "RemoveContainer" 
containerID="5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.121844 4886 scope.go:117] "RemoveContainer" containerID="103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7" Nov 24 09:02:35 crc kubenswrapper[4886]: E1124 09:02:35.122549 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7\": container with ID starting with 103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7 not found: ID does not exist" containerID="103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.122610 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7"} err="failed to get container status \"103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7\": rpc error: code = NotFound desc = could not find container \"103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7\": container with ID starting with 103f07d65c91b0a5f522ac3856220ff503db529da7ed9f6cfc3a6400b2575cc7 not found: ID does not exist" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.122634 4886 scope.go:117] "RemoveContainer" containerID="23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc" Nov 24 09:02:35 crc kubenswrapper[4886]: E1124 09:02:35.123771 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc\": container with ID starting with 23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc not found: ID does not exist" containerID="23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc" Nov 24 09:02:35 crc 
kubenswrapper[4886]: I1124 09:02:35.123842 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc"} err="failed to get container status \"23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc\": rpc error: code = NotFound desc = could not find container \"23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc\": container with ID starting with 23dd69464e1d7b61c5c8e02cf1fe4c031555fa61c813aa39cd07b57608d08ccc not found: ID does not exist" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.123886 4886 scope.go:117] "RemoveContainer" containerID="5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d" Nov 24 09:02:35 crc kubenswrapper[4886]: E1124 09:02:35.124253 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d\": container with ID starting with 5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d not found: ID does not exist" containerID="5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.124286 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d"} err="failed to get container status \"5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d\": rpc error: code = NotFound desc = could not find container \"5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d\": container with ID starting with 5adeca38722cbce8945a455c9b50baf109491de55411409e93f35eb613d7ee1d not found: ID does not exist" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.154385 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dzcf\" (UniqueName: 
\"kubernetes.io/projected/008ebdbd-3199-4af5-9a32-2686d760c3ec-kube-api-access-9dzcf\") pod \"008ebdbd-3199-4af5-9a32-2686d760c3ec\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.154523 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-catalog-content\") pod \"008ebdbd-3199-4af5-9a32-2686d760c3ec\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.154602 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-utilities\") pod \"008ebdbd-3199-4af5-9a32-2686d760c3ec\" (UID: \"008ebdbd-3199-4af5-9a32-2686d760c3ec\") " Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.155768 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-utilities" (OuterVolumeSpecName: "utilities") pod "008ebdbd-3199-4af5-9a32-2686d760c3ec" (UID: "008ebdbd-3199-4af5-9a32-2686d760c3ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.161807 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008ebdbd-3199-4af5-9a32-2686d760c3ec-kube-api-access-9dzcf" (OuterVolumeSpecName: "kube-api-access-9dzcf") pod "008ebdbd-3199-4af5-9a32-2686d760c3ec" (UID: "008ebdbd-3199-4af5-9a32-2686d760c3ec"). InnerVolumeSpecName "kube-api-access-9dzcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.213054 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008ebdbd-3199-4af5-9a32-2686d760c3ec" (UID: "008ebdbd-3199-4af5-9a32-2686d760c3ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.256169 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.256222 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008ebdbd-3199-4af5-9a32-2686d760c3ec-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.256238 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dzcf\" (UniqueName: \"kubernetes.io/projected/008ebdbd-3199-4af5-9a32-2686d760c3ec-kube-api-access-9dzcf\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.373057 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7c5s"] Nov 24 09:02:35 crc kubenswrapper[4886]: I1124 09:02:35.376985 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7c5s"] Nov 24 09:02:36 crc kubenswrapper[4886]: I1124 09:02:36.865705 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" path="/var/lib/kubelet/pods/008ebdbd-3199-4af5-9a32-2686d760c3ec/volumes" Nov 24 09:02:40 crc kubenswrapper[4886]: I1124 09:02:40.198726 4886 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:40 crc kubenswrapper[4886]: I1124 09:02:40.199774 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:40 crc kubenswrapper[4886]: I1124 09:02:40.249527 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:41 crc kubenswrapper[4886]: I1124 09:02:41.138682 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:41 crc kubenswrapper[4886]: I1124 09:02:41.829139 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xfxl"] Nov 24 09:02:43 crc kubenswrapper[4886]: I1124 09:02:43.121143 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7xfxl" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerName="registry-server" containerID="cri-o://f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374" gracePeriod=2 Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.093028 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.117232 4886 generic.go:334] "Generic (PLEG): container finished" podID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerID="f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374" exitCode=0 Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.117547 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xfxl" event={"ID":"9889e6b5-b0e9-4429-8d13-194c7f22b10e","Type":"ContainerDied","Data":"f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374"} Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.117584 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xfxl" event={"ID":"9889e6b5-b0e9-4429-8d13-194c7f22b10e","Type":"ContainerDied","Data":"21fd76a99f235f2affa29a8b92017a439b7a1e3ead78576404f8f58b3d125d05"} Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.117628 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xfxl" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.117632 4886 scope.go:117] "RemoveContainer" containerID="f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.137280 4886 scope.go:117] "RemoveContainer" containerID="5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.161196 4886 scope.go:117] "RemoveContainer" containerID="ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.178108 4886 scope.go:117] "RemoveContainer" containerID="f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374" Nov 24 09:02:44 crc kubenswrapper[4886]: E1124 09:02:44.178736 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374\": container with ID starting with f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374 not found: ID does not exist" containerID="f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.178796 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374"} err="failed to get container status \"f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374\": rpc error: code = NotFound desc = could not find container \"f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374\": container with ID starting with f6d268879789ed143fea5586474a04842c877ac131b5a03b5bcab85aefddb374 not found: ID does not exist" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.178828 4886 scope.go:117] "RemoveContainer" 
containerID="5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369" Nov 24 09:02:44 crc kubenswrapper[4886]: E1124 09:02:44.179203 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369\": container with ID starting with 5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369 not found: ID does not exist" containerID="5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.179250 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369"} err="failed to get container status \"5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369\": rpc error: code = NotFound desc = could not find container \"5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369\": container with ID starting with 5f71a48d927f7efd8885c252ab8bc0b922965694030a83195ff3ee74a14df369 not found: ID does not exist" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.179266 4886 scope.go:117] "RemoveContainer" containerID="ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806" Nov 24 09:02:44 crc kubenswrapper[4886]: E1124 09:02:44.179590 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806\": container with ID starting with ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806 not found: ID does not exist" containerID="ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.179672 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806"} err="failed to get container status \"ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806\": rpc error: code = NotFound desc = could not find container \"ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806\": container with ID starting with ea6ce673d4a2a4ba60fae94ac7be4335dd50e73a091088a02157d9241b7b2806 not found: ID does not exist" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.281834 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-utilities\") pod \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.281983 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdncx\" (UniqueName: \"kubernetes.io/projected/9889e6b5-b0e9-4429-8d13-194c7f22b10e-kube-api-access-tdncx\") pod \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.282035 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-catalog-content\") pod \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\" (UID: \"9889e6b5-b0e9-4429-8d13-194c7f22b10e\") " Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.283964 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-utilities" (OuterVolumeSpecName: "utilities") pod "9889e6b5-b0e9-4429-8d13-194c7f22b10e" (UID: "9889e6b5-b0e9-4429-8d13-194c7f22b10e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.297515 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9889e6b5-b0e9-4429-8d13-194c7f22b10e-kube-api-access-tdncx" (OuterVolumeSpecName: "kube-api-access-tdncx") pod "9889e6b5-b0e9-4429-8d13-194c7f22b10e" (UID: "9889e6b5-b0e9-4429-8d13-194c7f22b10e"). InnerVolumeSpecName "kube-api-access-tdncx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.309014 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9889e6b5-b0e9-4429-8d13-194c7f22b10e" (UID: "9889e6b5-b0e9-4429-8d13-194c7f22b10e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.383528 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.383582 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdncx\" (UniqueName: \"kubernetes.io/projected/9889e6b5-b0e9-4429-8d13-194c7f22b10e-kube-api-access-tdncx\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.383595 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9889e6b5-b0e9-4429-8d13-194c7f22b10e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.451481 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xfxl"] Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 
09:02:44.458061 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xfxl"] Nov 24 09:02:44 crc kubenswrapper[4886]: I1124 09:02:44.858685 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" path="/var/lib/kubelet/pods/9889e6b5-b0e9-4429-8d13-194c7f22b10e/volumes" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.163520 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-688456bb67-dhj9s" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.911706 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gr6pn"] Nov 24 09:02:51 crc kubenswrapper[4886]: E1124 09:02:51.912071 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerName="extract-content" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.912096 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerName="extract-content" Nov 24 09:02:51 crc kubenswrapper[4886]: E1124 09:02:51.912120 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerName="registry-server" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.912131 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerName="registry-server" Nov 24 09:02:51 crc kubenswrapper[4886]: E1124 09:02:51.912143 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerName="extract-utilities" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.912197 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerName="extract-utilities" Nov 24 09:02:51 crc kubenswrapper[4886]: E1124 09:02:51.912211 4886 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerName="extract-utilities" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.912219 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerName="extract-utilities" Nov 24 09:02:51 crc kubenswrapper[4886]: E1124 09:02:51.912237 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerName="extract-content" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.912247 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerName="extract-content" Nov 24 09:02:51 crc kubenswrapper[4886]: E1124 09:02:51.912257 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerName="registry-server" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.912265 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerName="registry-server" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.912434 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="008ebdbd-3199-4af5-9a32-2686d760c3ec" containerName="registry-server" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.912459 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9889e6b5-b0e9-4429-8d13-194c7f22b10e" containerName="registry-server" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.915034 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-npnj4"] Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.915047 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.915920 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.918931 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.919086 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.919226 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-48c4j" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.934828 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.951869 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-npnj4"] Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992414 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-conf\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992559 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqg9\" (UniqueName: \"kubernetes.io/projected/3d2f363e-5545-4437-90ff-060ba6628fa9-kube-api-access-fhqg9\") pod \"frr-k8s-webhook-server-6998585d5-npnj4\" (UID: \"3d2f363e-5545-4437-90ff-060ba6628fa9\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992623 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzh9\" (UniqueName: 
\"kubernetes.io/projected/68ac7d9f-558c-415d-a499-9aca2c3c7d62-kube-api-access-rpzh9\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992675 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-metrics\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992710 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68ac7d9f-558c-415d-a499-9aca2c3c7d62-metrics-certs\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992746 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-sockets\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992771 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-reloader\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992813 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d2f363e-5545-4437-90ff-060ba6628fa9-cert\") pod \"frr-k8s-webhook-server-6998585d5-npnj4\" (UID: 
\"3d2f363e-5545-4437-90ff-060ba6628fa9\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:02:51 crc kubenswrapper[4886]: I1124 09:02:51.992893 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-startup\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.016887 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gwqzh"] Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.017895 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.020580 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.020860 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.020934 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-fclxl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.023977 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.050398 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-znxdl"] Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.053482 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.062540 4886 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.080903 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-znxdl"] Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.093833 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-startup\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.093887 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-conf\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.093913 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dh8p\" (UniqueName: \"kubernetes.io/projected/8bfe8a52-0472-407d-a1c4-a828c81e5032-kube-api-access-4dh8p\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.093955 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-metrics-certs\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: 
I1124 09:02:52.093975 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqg9\" (UniqueName: \"kubernetes.io/projected/3d2f363e-5545-4437-90ff-060ba6628fa9-kube-api-access-fhqg9\") pod \"frr-k8s-webhook-server-6998585d5-npnj4\" (UID: \"3d2f363e-5545-4437-90ff-060ba6628fa9\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.093993 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094009 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-metallb-excludel2\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094032 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpzh9\" (UniqueName: \"kubernetes.io/projected/68ac7d9f-558c-415d-a499-9aca2c3c7d62-kube-api-access-rpzh9\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094055 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-metrics\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094070 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68ac7d9f-558c-415d-a499-9aca2c3c7d62-metrics-certs\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094090 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-sockets\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094107 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-metrics-certs\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094135 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-reloader\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094178 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-cert\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094200 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d2f363e-5545-4437-90ff-060ba6628fa9-cert\") pod 
\"frr-k8s-webhook-server-6998585d5-npnj4\" (UID: \"3d2f363e-5545-4437-90ff-060ba6628fa9\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.094224 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wklt\" (UniqueName: \"kubernetes.io/projected/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-kube-api-access-7wklt\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.095102 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-startup\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.095150 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-conf\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.095322 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-metrics\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.095581 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-reloader\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.095755 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/68ac7d9f-558c-415d-a499-9aca2c3c7d62-frr-sockets\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.101379 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d2f363e-5545-4437-90ff-060ba6628fa9-cert\") pod \"frr-k8s-webhook-server-6998585d5-npnj4\" (UID: \"3d2f363e-5545-4437-90ff-060ba6628fa9\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.124989 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpzh9\" (UniqueName: \"kubernetes.io/projected/68ac7d9f-558c-415d-a499-9aca2c3c7d62-kube-api-access-rpzh9\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.128256 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqg9\" (UniqueName: \"kubernetes.io/projected/3d2f363e-5545-4437-90ff-060ba6628fa9-kube-api-access-fhqg9\") pod \"frr-k8s-webhook-server-6998585d5-npnj4\" (UID: \"3d2f363e-5545-4437-90ff-060ba6628fa9\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.131386 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68ac7d9f-558c-415d-a499-9aca2c3c7d62-metrics-certs\") pod \"frr-k8s-gr6pn\" (UID: \"68ac7d9f-558c-415d-a499-9aca2c3c7d62\") " pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.195103 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.195174 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-metallb-excludel2\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.195221 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-metrics-certs\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.195244 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-cert\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: E1124 09:02:52.195268 4886 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 09:02:52 crc kubenswrapper[4886]: E1124 09:02:52.195346 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist podName:795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00 nodeName:}" failed. No retries permitted until 2025-11-24 09:02:52.695316393 +0000 UTC m=+828.582054538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist") pod "speaker-gwqzh" (UID: "795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00") : secret "metallb-memberlist" not found Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.195275 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wklt\" (UniqueName: \"kubernetes.io/projected/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-kube-api-access-7wklt\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.195521 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dh8p\" (UniqueName: \"kubernetes.io/projected/8bfe8a52-0472-407d-a1c4-a828c81e5032-kube-api-access-4dh8p\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.195643 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-metrics-certs\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: E1124 09:02:52.195793 4886 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 24 09:02:52 crc kubenswrapper[4886]: E1124 09:02:52.195866 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-metrics-certs podName:8bfe8a52-0472-407d-a1c4-a828c81e5032 nodeName:}" failed. No retries permitted until 2025-11-24 09:02:52.695838378 +0000 UTC m=+828.582576513 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-metrics-certs") pod "controller-6c7b4b5f48-znxdl" (UID: "8bfe8a52-0472-407d-a1c4-a828c81e5032") : secret "controller-certs-secret" not found Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.196130 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-metallb-excludel2\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.199184 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-metrics-certs\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.203903 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-cert\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.212406 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dh8p\" (UniqueName: \"kubernetes.io/projected/8bfe8a52-0472-407d-a1c4-a828c81e5032-kube-api-access-4dh8p\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.219457 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wklt\" (UniqueName: 
\"kubernetes.io/projected/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-kube-api-access-7wklt\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.243428 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.255843 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.539716 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-npnj4"] Nov 24 09:02:52 crc kubenswrapper[4886]: W1124 09:02:52.546806 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d2f363e_5545_4437_90ff_060ba6628fa9.slice/crio-42c0c96ed086f14a74070eae745c489c7ef3989363d00d1a7d97aa71d5cc2200 WatchSource:0}: Error finding container 42c0c96ed086f14a74070eae745c489c7ef3989363d00d1a7d97aa71d5cc2200: Status 404 returned error can't find the container with id 42c0c96ed086f14a74070eae745c489c7ef3989363d00d1a7d97aa71d5cc2200 Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.704369 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-metrics-certs\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.704425 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " 
pod="metallb-system/speaker-gwqzh" Nov 24 09:02:52 crc kubenswrapper[4886]: E1124 09:02:52.704543 4886 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 09:02:52 crc kubenswrapper[4886]: E1124 09:02:52.704602 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist podName:795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00 nodeName:}" failed. No retries permitted until 2025-11-24 09:02:53.704581709 +0000 UTC m=+829.591319844 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist") pod "speaker-gwqzh" (UID: "795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00") : secret "metallb-memberlist" not found Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.712872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8bfe8a52-0472-407d-a1c4-a828c81e5032-metrics-certs\") pod \"controller-6c7b4b5f48-znxdl\" (UID: \"8bfe8a52-0472-407d-a1c4-a828c81e5032\") " pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:52 crc kubenswrapper[4886]: I1124 09:02:52.984732 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:53 crc kubenswrapper[4886]: I1124 09:02:53.183318 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" event={"ID":"3d2f363e-5545-4437-90ff-060ba6628fa9","Type":"ContainerStarted","Data":"42c0c96ed086f14a74070eae745c489c7ef3989363d00d1a7d97aa71d5cc2200"} Nov 24 09:02:53 crc kubenswrapper[4886]: I1124 09:02:53.187182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerStarted","Data":"ebf858355eed0543a183806043c562eb8e7ae889ab75611cfe103c4e7e4c83db"} Nov 24 09:02:53 crc kubenswrapper[4886]: I1124 09:02:53.188185 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-znxdl"] Nov 24 09:02:53 crc kubenswrapper[4886]: I1124 09:02:53.722185 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:53 crc kubenswrapper[4886]: I1124 09:02:53.728776 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00-memberlist\") pod \"speaker-gwqzh\" (UID: \"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00\") " pod="metallb-system/speaker-gwqzh" Nov 24 09:02:53 crc kubenswrapper[4886]: I1124 09:02:53.851224 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gwqzh" Nov 24 09:02:53 crc kubenswrapper[4886]: W1124 09:02:53.874334 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod795ee8d3_ac1e_4a6f_ba0a_8b75eda08e00.slice/crio-b2cc243d0baa081c2a4aa0573a34046c92c15bd4f0015948e6f85261236db206 WatchSource:0}: Error finding container b2cc243d0baa081c2a4aa0573a34046c92c15bd4f0015948e6f85261236db206: Status 404 returned error can't find the container with id b2cc243d0baa081c2a4aa0573a34046c92c15bd4f0015948e6f85261236db206 Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.199392 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gwqzh" event={"ID":"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00","Type":"ContainerStarted","Data":"117c692408c5708eb29e64e5cd5697f4729f875f4354eb10529ce68301ae81e0"} Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.199476 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gwqzh" event={"ID":"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00","Type":"ContainerStarted","Data":"b2cc243d0baa081c2a4aa0573a34046c92c15bd4f0015948e6f85261236db206"} Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.201974 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-znxdl" event={"ID":"8bfe8a52-0472-407d-a1c4-a828c81e5032","Type":"ContainerStarted","Data":"847f17f0569c53858164880419ba42f7b9e074d153bc34b3310c04bd16b48e37"} Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.202023 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-znxdl" event={"ID":"8bfe8a52-0472-407d-a1c4-a828c81e5032","Type":"ContainerStarted","Data":"d31b6baa31ca144a254aa442460de96c337264073b8d8dcb724ba53f068a10b0"} Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.202037 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-6c7b4b5f48-znxdl" event={"ID":"8bfe8a52-0472-407d-a1c4-a828c81e5032","Type":"ContainerStarted","Data":"ea9daeb6074daba97ebb4419cb11296e8d0e8f2170d92abaa2eaba6d67598818"} Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.203003 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.222341 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-znxdl" podStartSLOduration=2.222322211 podStartE2EDuration="2.222322211s" podCreationTimestamp="2025-11-24 09:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:02:54.218466429 +0000 UTC m=+830.105204584" watchObservedRunningTime="2025-11-24 09:02:54.222322211 +0000 UTC m=+830.109060346" Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.975335 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2dvtb"] Nov 24 09:02:54 crc kubenswrapper[4886]: I1124 09:02:54.977305 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.023333 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dvtb"] Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.159783 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-utilities\") pod \"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.159845 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-catalog-content\") pod \"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.159903 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vm5\" (UniqueName: \"kubernetes.io/projected/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-kube-api-access-l4vm5\") pod \"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.216955 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gwqzh" event={"ID":"795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00","Type":"ContainerStarted","Data":"d08ad0abafac1683a18834cb6d99d9de922d907941c8a4c3fc589afae98bdf03"} Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.217245 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gwqzh" Nov 24 09:02:55 crc 
kubenswrapper[4886]: I1124 09:02:55.254827 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gwqzh" podStartSLOduration=4.254793807 podStartE2EDuration="4.254793807s" podCreationTimestamp="2025-11-24 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:02:55.250718099 +0000 UTC m=+831.137456254" watchObservedRunningTime="2025-11-24 09:02:55.254793807 +0000 UTC m=+831.141531952" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.262545 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-utilities\") pod \"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.262611 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-catalog-content\") pod \"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.262658 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4vm5\" (UniqueName: \"kubernetes.io/projected/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-kube-api-access-l4vm5\") pod \"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.264009 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-utilities\") pod 
\"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.264479 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-catalog-content\") pod \"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.307254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4vm5\" (UniqueName: \"kubernetes.io/projected/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-kube-api-access-l4vm5\") pod \"certified-operators-2dvtb\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:55 crc kubenswrapper[4886]: I1124 09:02:55.595131 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:02:56 crc kubenswrapper[4886]: I1124 09:02:56.277595 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dvtb"] Nov 24 09:02:56 crc kubenswrapper[4886]: W1124 09:02:56.290568 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod395daed1_d0f6_4c6f_8ffe_bd89d43c576f.slice/crio-4394ec50fb2b142a953475327e3fa5f6724411a0c9b44d475a05948716390c6b WatchSource:0}: Error finding container 4394ec50fb2b142a953475327e3fa5f6724411a0c9b44d475a05948716390c6b: Status 404 returned error can't find the container with id 4394ec50fb2b142a953475327e3fa5f6724411a0c9b44d475a05948716390c6b Nov 24 09:02:57 crc kubenswrapper[4886]: I1124 09:02:57.245506 4886 generic.go:334] "Generic (PLEG): container finished" podID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerID="3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae" exitCode=0 Nov 24 09:02:57 crc kubenswrapper[4886]: I1124 09:02:57.245876 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dvtb" event={"ID":"395daed1-d0f6-4c6f-8ffe-bd89d43c576f","Type":"ContainerDied","Data":"3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae"} Nov 24 09:02:57 crc kubenswrapper[4886]: I1124 09:02:57.245915 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dvtb" event={"ID":"395daed1-d0f6-4c6f-8ffe-bd89d43c576f","Type":"ContainerStarted","Data":"4394ec50fb2b142a953475327e3fa5f6724411a0c9b44d475a05948716390c6b"} Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.281035 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" 
event={"ID":"3d2f363e-5545-4437-90ff-060ba6628fa9","Type":"ContainerStarted","Data":"e29c1cb79d392eba511d87c1d1a2b0636f2ff476185e607dc908388dc966c640"} Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.281741 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.284527 4886 generic.go:334] "Generic (PLEG): container finished" podID="68ac7d9f-558c-415d-a499-9aca2c3c7d62" containerID="e6027050eb7151f841f64c120a337ce8d38a2e051fe0b815a5c5a4233e0c9421" exitCode=0 Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.284633 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerDied","Data":"e6027050eb7151f841f64c120a337ce8d38a2e051fe0b815a5c5a4233e0c9421"} Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.287895 4886 generic.go:334] "Generic (PLEG): container finished" podID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerID="c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def" exitCode=0 Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.287936 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dvtb" event={"ID":"395daed1-d0f6-4c6f-8ffe-bd89d43c576f","Type":"ContainerDied","Data":"c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def"} Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.306586 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" podStartSLOduration=2.321805304 podStartE2EDuration="10.306566062s" podCreationTimestamp="2025-11-24 09:02:51 +0000 UTC" firstStartedPulling="2025-11-24 09:02:52.550714846 +0000 UTC m=+828.437452981" lastFinishedPulling="2025-11-24 09:03:00.535475604 +0000 UTC m=+836.422213739" observedRunningTime="2025-11-24 
09:03:01.303408821 +0000 UTC m=+837.190146966" watchObservedRunningTime="2025-11-24 09:03:01.306566062 +0000 UTC m=+837.193304197" Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.784273 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.784826 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.784885 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.785765 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e3a75d48f5b6c64a0453de51e83f56ff421f563e7ead2b6374e43297260b2ce"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:03:01 crc kubenswrapper[4886]: I1124 09:03:01.785853 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://2e3a75d48f5b6c64a0453de51e83f56ff421f563e7ead2b6374e43297260b2ce" gracePeriod=600 Nov 24 09:03:02 crc kubenswrapper[4886]: I1124 09:03:02.298485 4886 generic.go:334] 
"Generic (PLEG): container finished" podID="68ac7d9f-558c-415d-a499-9aca2c3c7d62" containerID="45967936d9cdc5c64c3fd36b274414af6cb1d232e5e8769eb72eac4bf56a7399" exitCode=0 Nov 24 09:03:02 crc kubenswrapper[4886]: I1124 09:03:02.298562 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerDied","Data":"45967936d9cdc5c64c3fd36b274414af6cb1d232e5e8769eb72eac4bf56a7399"} Nov 24 09:03:02 crc kubenswrapper[4886]: I1124 09:03:02.307845 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="2e3a75d48f5b6c64a0453de51e83f56ff421f563e7ead2b6374e43297260b2ce" exitCode=0 Nov 24 09:03:02 crc kubenswrapper[4886]: I1124 09:03:02.307928 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"2e3a75d48f5b6c64a0453de51e83f56ff421f563e7ead2b6374e43297260b2ce"} Nov 24 09:03:02 crc kubenswrapper[4886]: I1124 09:03:02.307986 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"c28e4d2681964faf5e8db0a7f606c313301cd5d8f7fd6af733f6e4caf7367ebc"} Nov 24 09:03:02 crc kubenswrapper[4886]: I1124 09:03:02.308010 4886 scope.go:117] "RemoveContainer" containerID="10588af6709fb47b831a7119f79d39a2660cc9b0982198d8ef6ad1d8444269b4" Nov 24 09:03:02 crc kubenswrapper[4886]: I1124 09:03:02.312756 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dvtb" event={"ID":"395daed1-d0f6-4c6f-8ffe-bd89d43c576f","Type":"ContainerStarted","Data":"63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58"} Nov 24 09:03:02 crc kubenswrapper[4886]: I1124 09:03:02.388881 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2dvtb" podStartSLOduration=3.911056615 podStartE2EDuration="8.388833736s" podCreationTimestamp="2025-11-24 09:02:54 +0000 UTC" firstStartedPulling="2025-11-24 09:02:57.248329747 +0000 UTC m=+833.135067872" lastFinishedPulling="2025-11-24 09:03:01.726106858 +0000 UTC m=+837.612844993" observedRunningTime="2025-11-24 09:03:02.386630513 +0000 UTC m=+838.273368658" watchObservedRunningTime="2025-11-24 09:03:02.388833736 +0000 UTC m=+838.275571871" Nov 24 09:03:03 crc kubenswrapper[4886]: I1124 09:03:03.323145 4886 generic.go:334] "Generic (PLEG): container finished" podID="68ac7d9f-558c-415d-a499-9aca2c3c7d62" containerID="dee5f421d298cc6bdc554539041e4985459a4a9a02269ad8bfc29bc3f2688a6e" exitCode=0 Nov 24 09:03:03 crc kubenswrapper[4886]: I1124 09:03:03.323255 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerDied","Data":"dee5f421d298cc6bdc554539041e4985459a4a9a02269ad8bfc29bc3f2688a6e"} Nov 24 09:03:04 crc kubenswrapper[4886]: I1124 09:03:04.343755 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerStarted","Data":"80145036c44c7e4025e83114989e98db5dc63285e1fad3466727fa7aa0d3d108"} Nov 24 09:03:04 crc kubenswrapper[4886]: I1124 09:03:04.344349 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerStarted","Data":"95c066eb384665e9bf1a5f11d2e45f8ea9bce07f2f96b4251a2ee3f6f9672f53"} Nov 24 09:03:04 crc kubenswrapper[4886]: I1124 09:03:04.344364 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" 
event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerStarted","Data":"df90601464d6e190c7be44bf01214e6cad24511936f5a4adb384d4c407167d04"} Nov 24 09:03:04 crc kubenswrapper[4886]: I1124 09:03:04.344374 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerStarted","Data":"785e220cdd7906ebf7e58db89b98c9924e308c2a73ae2a36c65529ce9e916937"} Nov 24 09:03:04 crc kubenswrapper[4886]: I1124 09:03:04.344386 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerStarted","Data":"11bdb7829adbe143702d1defa3d572e8c5d03c02bf4a8ced9ce9ae2f58b0d8ff"} Nov 24 09:03:05 crc kubenswrapper[4886]: I1124 09:03:05.370002 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gr6pn" event={"ID":"68ac7d9f-558c-415d-a499-9aca2c3c7d62","Type":"ContainerStarted","Data":"3a00c9311968224bd16f4524526ba6b890d3b2265598c1ffd3af153299c4781c"} Nov 24 09:03:05 crc kubenswrapper[4886]: I1124 09:03:05.370464 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:03:05 crc kubenswrapper[4886]: I1124 09:03:05.401583 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gr6pn" podStartSLOduration=6.398045689 podStartE2EDuration="14.401560399s" podCreationTimestamp="2025-11-24 09:02:51 +0000 UTC" firstStartedPulling="2025-11-24 09:02:52.527219407 +0000 UTC m=+828.413957542" lastFinishedPulling="2025-11-24 09:03:00.530734117 +0000 UTC m=+836.417472252" observedRunningTime="2025-11-24 09:03:05.399108508 +0000 UTC m=+841.285846653" watchObservedRunningTime="2025-11-24 09:03:05.401560399 +0000 UTC m=+841.288298534" Nov 24 09:03:05 crc kubenswrapper[4886]: I1124 09:03:05.595352 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:03:05 crc kubenswrapper[4886]: I1124 09:03:05.595427 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:03:05 crc kubenswrapper[4886]: I1124 09:03:05.644450 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:03:06 crc kubenswrapper[4886]: I1124 09:03:06.422662 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:03:06 crc kubenswrapper[4886]: I1124 09:03:06.476855 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dvtb"] Nov 24 09:03:07 crc kubenswrapper[4886]: I1124 09:03:07.244702 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:03:07 crc kubenswrapper[4886]: I1124 09:03:07.288776 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.396959 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2dvtb" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerName="registry-server" containerID="cri-o://63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58" gracePeriod=2 Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.789632 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.821548 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-utilities\") pod \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.821654 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4vm5\" (UniqueName: \"kubernetes.io/projected/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-kube-api-access-l4vm5\") pod \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.821729 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-catalog-content\") pod \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\" (UID: \"395daed1-d0f6-4c6f-8ffe-bd89d43c576f\") " Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.822493 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-utilities" (OuterVolumeSpecName: "utilities") pod "395daed1-d0f6-4c6f-8ffe-bd89d43c576f" (UID: "395daed1-d0f6-4c6f-8ffe-bd89d43c576f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.828569 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-kube-api-access-l4vm5" (OuterVolumeSpecName: "kube-api-access-l4vm5") pod "395daed1-d0f6-4c6f-8ffe-bd89d43c576f" (UID: "395daed1-d0f6-4c6f-8ffe-bd89d43c576f"). InnerVolumeSpecName "kube-api-access-l4vm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.886550 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "395daed1-d0f6-4c6f-8ffe-bd89d43c576f" (UID: "395daed1-d0f6-4c6f-8ffe-bd89d43c576f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.923459 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4vm5\" (UniqueName: \"kubernetes.io/projected/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-kube-api-access-l4vm5\") on node \"crc\" DevicePath \"\"" Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.923529 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:03:08 crc kubenswrapper[4886]: I1124 09:03:08.923547 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395daed1-d0f6-4c6f-8ffe-bd89d43c576f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.405411 4886 generic.go:334] "Generic (PLEG): container finished" podID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerID="63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58" exitCode=0 Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.405469 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dvtb" event={"ID":"395daed1-d0f6-4c6f-8ffe-bd89d43c576f","Type":"ContainerDied","Data":"63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58"} Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.405504 4886 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2dvtb" event={"ID":"395daed1-d0f6-4c6f-8ffe-bd89d43c576f","Type":"ContainerDied","Data":"4394ec50fb2b142a953475327e3fa5f6724411a0c9b44d475a05948716390c6b"} Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.405527 4886 scope.go:117] "RemoveContainer" containerID="63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.405549 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dvtb" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.434675 4886 scope.go:117] "RemoveContainer" containerID="c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.443773 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dvtb"] Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.453854 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2dvtb"] Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.463080 4886 scope.go:117] "RemoveContainer" containerID="3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.481550 4886 scope.go:117] "RemoveContainer" containerID="63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58" Nov 24 09:03:09 crc kubenswrapper[4886]: E1124 09:03:09.482889 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58\": container with ID starting with 63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58 not found: ID does not exist" containerID="63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 
09:03:09.483032 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58"} err="failed to get container status \"63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58\": rpc error: code = NotFound desc = could not find container \"63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58\": container with ID starting with 63065b948f4df49c996f830a90b385adce56b6b26acbae9acbfcbdc975fcbc58 not found: ID does not exist" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.483168 4886 scope.go:117] "RemoveContainer" containerID="c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def" Nov 24 09:03:09 crc kubenswrapper[4886]: E1124 09:03:09.483705 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def\": container with ID starting with c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def not found: ID does not exist" containerID="c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.483839 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def"} err="failed to get container status \"c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def\": rpc error: code = NotFound desc = could not find container \"c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def\": container with ID starting with c8efe2b519da1b41cbfb86f0b3f8e98737aeb2dd0fb4fc7a9653a48a91ea7def not found: ID does not exist" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.483928 4886 scope.go:117] "RemoveContainer" containerID="3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae" Nov 24 09:03:09 crc 
kubenswrapper[4886]: E1124 09:03:09.484542 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae\": container with ID starting with 3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae not found: ID does not exist" containerID="3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae" Nov 24 09:03:09 crc kubenswrapper[4886]: I1124 09:03:09.484590 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae"} err="failed to get container status \"3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae\": rpc error: code = NotFound desc = could not find container \"3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae\": container with ID starting with 3c2937ef31b5f8ec2a341b91d7f6233bb9f0aec54e862f46c097be4e202f8aae not found: ID does not exist" Nov 24 09:03:10 crc kubenswrapper[4886]: I1124 09:03:10.857912 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" path="/var/lib/kubelet/pods/395daed1-d0f6-4c6f-8ffe-bd89d43c576f/volumes" Nov 24 09:03:12 crc kubenswrapper[4886]: I1124 09:03:12.261208 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-npnj4" Nov 24 09:03:12 crc kubenswrapper[4886]: I1124 09:03:12.990617 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-znxdl" Nov 24 09:03:13 crc kubenswrapper[4886]: I1124 09:03:13.856025 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gwqzh" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.803000 4886 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-lqwtt"] Nov 24 09:03:16 crc kubenswrapper[4886]: E1124 09:03:16.803294 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerName="registry-server" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.803309 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerName="registry-server" Nov 24 09:03:16 crc kubenswrapper[4886]: E1124 09:03:16.803321 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerName="extract-utilities" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.803327 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerName="extract-utilities" Nov 24 09:03:16 crc kubenswrapper[4886]: E1124 09:03:16.803350 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerName="extract-content" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.803356 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerName="extract-content" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.803462 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="395daed1-d0f6-4c6f-8ffe-bd89d43c576f" containerName="registry-server" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.803923 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lqwtt" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.805871 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.806357 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.806479 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-w7pmw" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.843519 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlzx4\" (UniqueName: \"kubernetes.io/projected/a0d1d03c-56b2-4300-823f-7ccdbad491e8-kube-api-access-vlzx4\") pod \"openstack-operator-index-lqwtt\" (UID: \"a0d1d03c-56b2-4300-823f-7ccdbad491e8\") " pod="openstack-operators/openstack-operator-index-lqwtt" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.857043 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lqwtt"] Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.945259 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlzx4\" (UniqueName: \"kubernetes.io/projected/a0d1d03c-56b2-4300-823f-7ccdbad491e8-kube-api-access-vlzx4\") pod \"openstack-operator-index-lqwtt\" (UID: \"a0d1d03c-56b2-4300-823f-7ccdbad491e8\") " pod="openstack-operators/openstack-operator-index-lqwtt" Nov 24 09:03:16 crc kubenswrapper[4886]: I1124 09:03:16.977473 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlzx4\" (UniqueName: \"kubernetes.io/projected/a0d1d03c-56b2-4300-823f-7ccdbad491e8-kube-api-access-vlzx4\") pod \"openstack-operator-index-lqwtt\" (UID: 
\"a0d1d03c-56b2-4300-823f-7ccdbad491e8\") " pod="openstack-operators/openstack-operator-index-lqwtt" Nov 24 09:03:17 crc kubenswrapper[4886]: I1124 09:03:17.130408 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lqwtt" Nov 24 09:03:17 crc kubenswrapper[4886]: I1124 09:03:17.542749 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lqwtt"] Nov 24 09:03:18 crc kubenswrapper[4886]: I1124 09:03:18.469496 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lqwtt" event={"ID":"a0d1d03c-56b2-4300-823f-7ccdbad491e8","Type":"ContainerStarted","Data":"0b4d323058f0b5b06680c2d772fa5b7e4fcb860a186412e56d39a5906a3dcea9"} Nov 24 09:03:20 crc kubenswrapper[4886]: I1124 09:03:20.484576 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lqwtt" event={"ID":"a0d1d03c-56b2-4300-823f-7ccdbad491e8","Type":"ContainerStarted","Data":"e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d"} Nov 24 09:03:20 crc kubenswrapper[4886]: I1124 09:03:20.505113 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lqwtt" podStartSLOduration=2.347020002 podStartE2EDuration="4.505089654s" podCreationTimestamp="2025-11-24 09:03:16 +0000 UTC" firstStartedPulling="2025-11-24 09:03:17.561283521 +0000 UTC m=+853.448021656" lastFinishedPulling="2025-11-24 09:03:19.719353173 +0000 UTC m=+855.606091308" observedRunningTime="2025-11-24 09:03:20.50493578 +0000 UTC m=+856.391673915" watchObservedRunningTime="2025-11-24 09:03:20.505089654 +0000 UTC m=+856.391827789" Nov 24 09:03:20 crc kubenswrapper[4886]: I1124 09:03:20.559008 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lqwtt"] Nov 24 09:03:21 crc kubenswrapper[4886]: I1124 09:03:21.160680 
4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wfxd2"] Nov 24 09:03:21 crc kubenswrapper[4886]: I1124 09:03:21.161623 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:21 crc kubenswrapper[4886]: I1124 09:03:21.170016 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wfxd2"] Nov 24 09:03:21 crc kubenswrapper[4886]: I1124 09:03:21.217351 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q795\" (UniqueName: \"kubernetes.io/projected/f134bfae-349d-4078-b49c-7aba86c32093-kube-api-access-4q795\") pod \"openstack-operator-index-wfxd2\" (UID: \"f134bfae-349d-4078-b49c-7aba86c32093\") " pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:21 crc kubenswrapper[4886]: I1124 09:03:21.319366 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q795\" (UniqueName: \"kubernetes.io/projected/f134bfae-349d-4078-b49c-7aba86c32093-kube-api-access-4q795\") pod \"openstack-operator-index-wfxd2\" (UID: \"f134bfae-349d-4078-b49c-7aba86c32093\") " pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:21 crc kubenswrapper[4886]: I1124 09:03:21.336474 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q795\" (UniqueName: \"kubernetes.io/projected/f134bfae-349d-4078-b49c-7aba86c32093-kube-api-access-4q795\") pod \"openstack-operator-index-wfxd2\" (UID: \"f134bfae-349d-4078-b49c-7aba86c32093\") " pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:21 crc kubenswrapper[4886]: I1124 09:03:21.488589 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:21 crc kubenswrapper[4886]: I1124 09:03:21.884089 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wfxd2"] Nov 24 09:03:22 crc kubenswrapper[4886]: I1124 09:03:22.247177 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gr6pn" Nov 24 09:03:22 crc kubenswrapper[4886]: I1124 09:03:22.499293 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wfxd2" event={"ID":"f134bfae-349d-4078-b49c-7aba86c32093","Type":"ContainerStarted","Data":"cffa49de415326be82a607608d1ea1bb8c5ec28b0c93a2c33e7f05450595185b"} Nov 24 09:03:22 crc kubenswrapper[4886]: I1124 09:03:22.499374 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wfxd2" event={"ID":"f134bfae-349d-4078-b49c-7aba86c32093","Type":"ContainerStarted","Data":"939134775cdde51d35ae80ec40d2168e52e0f568798a7cd89a336d3895fb1d51"} Nov 24 09:03:22 crc kubenswrapper[4886]: I1124 09:03:22.499388 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-lqwtt" podUID="a0d1d03c-56b2-4300-823f-7ccdbad491e8" containerName="registry-server" containerID="cri-o://e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d" gracePeriod=2 Nov 24 09:03:22 crc kubenswrapper[4886]: I1124 09:03:22.522097 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wfxd2" podStartSLOduration=1.474889288 podStartE2EDuration="1.522067041s" podCreationTimestamp="2025-11-24 09:03:21 +0000 UTC" firstStartedPulling="2025-11-24 09:03:21.898860183 +0000 UTC m=+857.785598318" lastFinishedPulling="2025-11-24 09:03:21.946037946 +0000 UTC m=+857.832776071" observedRunningTime="2025-11-24 09:03:22.519339572 +0000 UTC 
m=+858.406077717" watchObservedRunningTime="2025-11-24 09:03:22.522067041 +0000 UTC m=+858.408805176" Nov 24 09:03:22 crc kubenswrapper[4886]: I1124 09:03:22.883276 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lqwtt" Nov 24 09:03:22 crc kubenswrapper[4886]: I1124 09:03:22.942947 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlzx4\" (UniqueName: \"kubernetes.io/projected/a0d1d03c-56b2-4300-823f-7ccdbad491e8-kube-api-access-vlzx4\") pod \"a0d1d03c-56b2-4300-823f-7ccdbad491e8\" (UID: \"a0d1d03c-56b2-4300-823f-7ccdbad491e8\") " Nov 24 09:03:22 crc kubenswrapper[4886]: I1124 09:03:22.949904 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d1d03c-56b2-4300-823f-7ccdbad491e8-kube-api-access-vlzx4" (OuterVolumeSpecName: "kube-api-access-vlzx4") pod "a0d1d03c-56b2-4300-823f-7ccdbad491e8" (UID: "a0d1d03c-56b2-4300-823f-7ccdbad491e8"). InnerVolumeSpecName "kube-api-access-vlzx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.045435 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlzx4\" (UniqueName: \"kubernetes.io/projected/a0d1d03c-56b2-4300-823f-7ccdbad491e8-kube-api-access-vlzx4\") on node \"crc\" DevicePath \"\"" Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.509896 4886 generic.go:334] "Generic (PLEG): container finished" podID="a0d1d03c-56b2-4300-823f-7ccdbad491e8" containerID="e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d" exitCode=0 Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.509953 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lqwtt" event={"ID":"a0d1d03c-56b2-4300-823f-7ccdbad491e8","Type":"ContainerDied","Data":"e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d"} Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.509990 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lqwtt" Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.510023 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lqwtt" event={"ID":"a0d1d03c-56b2-4300-823f-7ccdbad491e8","Type":"ContainerDied","Data":"0b4d323058f0b5b06680c2d772fa5b7e4fcb860a186412e56d39a5906a3dcea9"} Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.510047 4886 scope.go:117] "RemoveContainer" containerID="e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d" Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.537798 4886 scope.go:117] "RemoveContainer" containerID="e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d" Nov 24 09:03:23 crc kubenswrapper[4886]: E1124 09:03:23.538596 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d\": container with ID starting with e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d not found: ID does not exist" containerID="e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d" Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.538659 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d"} err="failed to get container status \"e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d\": rpc error: code = NotFound desc = could not find container \"e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d\": container with ID starting with e97f541667ae911df1c19dab067a35cf14e43d0a23a94ae6d49b370fc6c7251d not found: ID does not exist" Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.542481 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lqwtt"] 
Nov 24 09:03:23 crc kubenswrapper[4886]: I1124 09:03:23.546222 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-lqwtt"] Nov 24 09:03:24 crc kubenswrapper[4886]: I1124 09:03:24.860706 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d1d03c-56b2-4300-823f-7ccdbad491e8" path="/var/lib/kubelet/pods/a0d1d03c-56b2-4300-823f-7ccdbad491e8/volumes" Nov 24 09:03:31 crc kubenswrapper[4886]: I1124 09:03:31.489386 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:31 crc kubenswrapper[4886]: I1124 09:03:31.489977 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:31 crc kubenswrapper[4886]: I1124 09:03:31.525291 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:31 crc kubenswrapper[4886]: I1124 09:03:31.591848 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wfxd2" Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.839320 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22"] Nov 24 09:03:36 crc kubenswrapper[4886]: E1124 09:03:36.840663 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d1d03c-56b2-4300-823f-7ccdbad491e8" containerName="registry-server" Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.840683 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d1d03c-56b2-4300-823f-7ccdbad491e8" containerName="registry-server" Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.840833 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d1d03c-56b2-4300-823f-7ccdbad491e8" 
containerName="registry-server" Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.841931 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.851955 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-6fwf4" Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.858863 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22"] Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.951602 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-bundle\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.952804 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwpg8\" (UniqueName: \"kubernetes.io/projected/603fdc43-36f5-4e80-9037-36c972f7cf05-kube-api-access-gwpg8\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:36 crc kubenswrapper[4886]: I1124 09:03:36.952905 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-util\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " 
pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.055939 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-bundle\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.056071 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwpg8\" (UniqueName: \"kubernetes.io/projected/603fdc43-36f5-4e80-9037-36c972f7cf05-kube-api-access-gwpg8\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.056118 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-util\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.056549 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-bundle\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.056655 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-util\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.080128 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwpg8\" (UniqueName: \"kubernetes.io/projected/603fdc43-36f5-4e80-9037-36c972f7cf05-kube-api-access-gwpg8\") pod \"0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.170267 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.572301 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22"] Nov 24 09:03:37 crc kubenswrapper[4886]: W1124 09:03:37.579860 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod603fdc43_36f5_4e80_9037_36c972f7cf05.slice/crio-cf8e7f6fa00cab0adb3220da2094a04711d66e56ad7890b405f8ae611ce23b5e WatchSource:0}: Error finding container cf8e7f6fa00cab0adb3220da2094a04711d66e56ad7890b405f8ae611ce23b5e: Status 404 returned error can't find the container with id cf8e7f6fa00cab0adb3220da2094a04711d66e56ad7890b405f8ae611ce23b5e Nov 24 09:03:37 crc kubenswrapper[4886]: I1124 09:03:37.604761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" event={"ID":"603fdc43-36f5-4e80-9037-36c972f7cf05","Type":"ContainerStarted","Data":"cf8e7f6fa00cab0adb3220da2094a04711d66e56ad7890b405f8ae611ce23b5e"} Nov 24 09:03:38 crc kubenswrapper[4886]: I1124 09:03:38.615558 4886 generic.go:334] "Generic (PLEG): container finished" podID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerID="4847efa5f91ba43ed4b996e15615754bfa4ca783c0cab30d89a92b9dc7d889e7" exitCode=0 Nov 24 09:03:38 crc kubenswrapper[4886]: I1124 09:03:38.615651 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" event={"ID":"603fdc43-36f5-4e80-9037-36c972f7cf05","Type":"ContainerDied","Data":"4847efa5f91ba43ed4b996e15615754bfa4ca783c0cab30d89a92b9dc7d889e7"} Nov 24 09:03:39 crc kubenswrapper[4886]: I1124 09:03:39.624424 4886 generic.go:334] "Generic (PLEG): container finished" podID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerID="2539d80d8cb74796bf1ed6daaf0eaaf7db52a228eb942f7d7825a530b38f7012" exitCode=0 Nov 24 09:03:39 crc kubenswrapper[4886]: I1124 09:03:39.624479 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" event={"ID":"603fdc43-36f5-4e80-9037-36c972f7cf05","Type":"ContainerDied","Data":"2539d80d8cb74796bf1ed6daaf0eaaf7db52a228eb942f7d7825a530b38f7012"} Nov 24 09:03:40 crc kubenswrapper[4886]: I1124 09:03:40.633429 4886 generic.go:334] "Generic (PLEG): container finished" podID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerID="c8778537533dcd012c679bf72dcd582f7339ff04660a29280fd97d372aab0a31" exitCode=0 Nov 24 09:03:40 crc kubenswrapper[4886]: I1124 09:03:40.633501 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" 
event={"ID":"603fdc43-36f5-4e80-9037-36c972f7cf05","Type":"ContainerDied","Data":"c8778537533dcd012c679bf72dcd582f7339ff04660a29280fd97d372aab0a31"} Nov 24 09:03:41 crc kubenswrapper[4886]: I1124 09:03:41.906100 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:41 crc kubenswrapper[4886]: I1124 09:03:41.930707 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-util\") pod \"603fdc43-36f5-4e80-9037-36c972f7cf05\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " Nov 24 09:03:41 crc kubenswrapper[4886]: I1124 09:03:41.930888 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwpg8\" (UniqueName: \"kubernetes.io/projected/603fdc43-36f5-4e80-9037-36c972f7cf05-kube-api-access-gwpg8\") pod \"603fdc43-36f5-4e80-9037-36c972f7cf05\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " Nov 24 09:03:41 crc kubenswrapper[4886]: I1124 09:03:41.930956 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-bundle\") pod \"603fdc43-36f5-4e80-9037-36c972f7cf05\" (UID: \"603fdc43-36f5-4e80-9037-36c972f7cf05\") " Nov 24 09:03:41 crc kubenswrapper[4886]: I1124 09:03:41.932575 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-bundle" (OuterVolumeSpecName: "bundle") pod "603fdc43-36f5-4e80-9037-36c972f7cf05" (UID: "603fdc43-36f5-4e80-9037-36c972f7cf05"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:03:41 crc kubenswrapper[4886]: I1124 09:03:41.939338 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603fdc43-36f5-4e80-9037-36c972f7cf05-kube-api-access-gwpg8" (OuterVolumeSpecName: "kube-api-access-gwpg8") pod "603fdc43-36f5-4e80-9037-36c972f7cf05" (UID: "603fdc43-36f5-4e80-9037-36c972f7cf05"). InnerVolumeSpecName "kube-api-access-gwpg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:03:41 crc kubenswrapper[4886]: I1124 09:03:41.949012 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-util" (OuterVolumeSpecName: "util") pod "603fdc43-36f5-4e80-9037-36c972f7cf05" (UID: "603fdc43-36f5-4e80-9037-36c972f7cf05"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:03:42 crc kubenswrapper[4886]: I1124 09:03:42.033061 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwpg8\" (UniqueName: \"kubernetes.io/projected/603fdc43-36f5-4e80-9037-36c972f7cf05-kube-api-access-gwpg8\") on node \"crc\" DevicePath \"\"" Nov 24 09:03:42 crc kubenswrapper[4886]: I1124 09:03:42.033111 4886 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:03:42 crc kubenswrapper[4886]: I1124 09:03:42.033125 4886 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/603fdc43-36f5-4e80-9037-36c972f7cf05-util\") on node \"crc\" DevicePath \"\"" Nov 24 09:03:42 crc kubenswrapper[4886]: I1124 09:03:42.649087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" 
event={"ID":"603fdc43-36f5-4e80-9037-36c972f7cf05","Type":"ContainerDied","Data":"cf8e7f6fa00cab0adb3220da2094a04711d66e56ad7890b405f8ae611ce23b5e"} Nov 24 09:03:42 crc kubenswrapper[4886]: I1124 09:03:42.649158 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf8e7f6fa00cab0adb3220da2094a04711d66e56ad7890b405f8ae611ce23b5e" Nov 24 09:03:42 crc kubenswrapper[4886]: I1124 09:03:42.649146 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22" Nov 24 09:03:49 crc kubenswrapper[4886]: I1124 09:03:49.946460 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk"] Nov 24 09:03:49 crc kubenswrapper[4886]: E1124 09:03:49.947742 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerName="extract" Nov 24 09:03:49 crc kubenswrapper[4886]: I1124 09:03:49.947767 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerName="extract" Nov 24 09:03:49 crc kubenswrapper[4886]: E1124 09:03:49.947785 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerName="util" Nov 24 09:03:49 crc kubenswrapper[4886]: I1124 09:03:49.947793 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerName="util" Nov 24 09:03:49 crc kubenswrapper[4886]: E1124 09:03:49.947825 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerName="pull" Nov 24 09:03:49 crc kubenswrapper[4886]: I1124 09:03:49.947833 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerName="pull" Nov 24 09:03:49 crc kubenswrapper[4886]: I1124 09:03:49.948015 4886 
memory_manager.go:354] "RemoveStaleState removing state" podUID="603fdc43-36f5-4e80-9037-36c972f7cf05" containerName="extract" Nov 24 09:03:49 crc kubenswrapper[4886]: I1124 09:03:49.948991 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" Nov 24 09:03:49 crc kubenswrapper[4886]: I1124 09:03:49.953045 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-cdf9w" Nov 24 09:03:49 crc kubenswrapper[4886]: I1124 09:03:49.973042 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk"] Nov 24 09:03:50 crc kubenswrapper[4886]: I1124 09:03:50.049534 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gslxx\" (UniqueName: \"kubernetes.io/projected/48f1853b-9770-4f82-af2b-fc2be2f426b6-kube-api-access-gslxx\") pod \"openstack-operator-controller-operator-5968c54bfb-nfhfk\" (UID: \"48f1853b-9770-4f82-af2b-fc2be2f426b6\") " pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" Nov 24 09:03:50 crc kubenswrapper[4886]: I1124 09:03:50.151500 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gslxx\" (UniqueName: \"kubernetes.io/projected/48f1853b-9770-4f82-af2b-fc2be2f426b6-kube-api-access-gslxx\") pod \"openstack-operator-controller-operator-5968c54bfb-nfhfk\" (UID: \"48f1853b-9770-4f82-af2b-fc2be2f426b6\") " pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" Nov 24 09:03:50 crc kubenswrapper[4886]: I1124 09:03:50.187487 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gslxx\" (UniqueName: \"kubernetes.io/projected/48f1853b-9770-4f82-af2b-fc2be2f426b6-kube-api-access-gslxx\") pod 
\"openstack-operator-controller-operator-5968c54bfb-nfhfk\" (UID: \"48f1853b-9770-4f82-af2b-fc2be2f426b6\") " pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" Nov 24 09:03:50 crc kubenswrapper[4886]: I1124 09:03:50.271710 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" Nov 24 09:03:50 crc kubenswrapper[4886]: I1124 09:03:50.547962 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk"] Nov 24 09:03:50 crc kubenswrapper[4886]: I1124 09:03:50.702332 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" event={"ID":"48f1853b-9770-4f82-af2b-fc2be2f426b6","Type":"ContainerStarted","Data":"f4c44a960a7f8ac38f4d5f866ae08261ab3357e3742b8a3bc953faa5d89d3a85"} Nov 24 09:03:55 crc kubenswrapper[4886]: I1124 09:03:55.737989 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" event={"ID":"48f1853b-9770-4f82-af2b-fc2be2f426b6","Type":"ContainerStarted","Data":"7f650458bec51fd42884b0b3bca96f8455d335ad2840f0fea0d48ee0e13b464c"} Nov 24 09:03:57 crc kubenswrapper[4886]: I1124 09:03:57.752811 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" event={"ID":"48f1853b-9770-4f82-af2b-fc2be2f426b6","Type":"ContainerStarted","Data":"2d45287cd83bdbc00259b260621724f21dab8f1053a44e3a871636fe66e4eb0b"} Nov 24 09:03:57 crc kubenswrapper[4886]: I1124 09:03:57.753212 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" Nov 24 09:03:57 crc kubenswrapper[4886]: I1124 09:03:57.788840 4886 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" podStartSLOduration=2.034397076 podStartE2EDuration="8.788810562s" podCreationTimestamp="2025-11-24 09:03:49 +0000 UTC" firstStartedPulling="2025-11-24 09:03:50.556537096 +0000 UTC m=+886.443275231" lastFinishedPulling="2025-11-24 09:03:57.310950582 +0000 UTC m=+893.197688717" observedRunningTime="2025-11-24 09:03:57.783505359 +0000 UTC m=+893.670243504" watchObservedRunningTime="2025-11-24 09:03:57.788810562 +0000 UTC m=+893.675548717" Nov 24 09:04:00 crc kubenswrapper[4886]: I1124 09:04:00.275861 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5968c54bfb-nfhfk" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.441342 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.443430 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.447376 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.448657 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.452275 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9b7zv" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.453474 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ss47q" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.545220 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.546844 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.550800 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.582221 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-25dlm" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.596172 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5vg\" (UniqueName: \"kubernetes.io/projected/ad04acbe-59a4-490c-ae4e-eacfbd65257c-kube-api-access-cj5vg\") pod \"designate-operator-controller-manager-767ccfd65f-9lqmh\" (UID: \"ad04acbe-59a4-490c-ae4e-eacfbd65257c\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.596235 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vhcln\" (UniqueName: \"kubernetes.io/projected/0ca0fbbb-1734-4a4a-b996-c96aa000131c-kube-api-access-vhcln\") pod \"cinder-operator-controller-manager-6498cbf48f-6pwgl\" (UID: \"0ca0fbbb-1734-4a4a-b996-c96aa000131c\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.596296 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8w6l\" (UniqueName: \"kubernetes.io/projected/6c8c64e0-e4d5-45c1-a697-205deeb19c54-kube-api-access-r8w6l\") pod \"barbican-operator-controller-manager-75fb479bcc-pvdd8\" (UID: \"6c8c64e0-e4d5-45c1-a697-205deeb19c54\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.609706 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.611512 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.614557 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.622941 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.627677 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c62l8" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.635208 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.636906 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.649212 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mj79r" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.665243 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.683238 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.685751 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.688580 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-6t85f" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.697283 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8w6l\" (UniqueName: \"kubernetes.io/projected/6c8c64e0-e4d5-45c1-a697-205deeb19c54-kube-api-access-r8w6l\") pod \"barbican-operator-controller-manager-75fb479bcc-pvdd8\" (UID: \"6c8c64e0-e4d5-45c1-a697-205deeb19c54\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.697344 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvjp2\" (UniqueName: \"kubernetes.io/projected/a991f440-958e-42d4-b062-7369966d84c3-kube-api-access-pvjp2\") pod \"glance-operator-controller-manager-7969689c84-jb6p4\" (UID: \"a991f440-958e-42d4-b062-7369966d84c3\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.697407 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj5vg\" (UniqueName: \"kubernetes.io/projected/ad04acbe-59a4-490c-ae4e-eacfbd65257c-kube-api-access-cj5vg\") pod \"designate-operator-controller-manager-767ccfd65f-9lqmh\" (UID: \"ad04acbe-59a4-490c-ae4e-eacfbd65257c\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.697448 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhcln\" (UniqueName: \"kubernetes.io/projected/0ca0fbbb-1734-4a4a-b996-c96aa000131c-kube-api-access-vhcln\") pod 
\"cinder-operator-controller-manager-6498cbf48f-6pwgl\" (UID: \"0ca0fbbb-1734-4a4a-b996-c96aa000131c\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.697507 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7qt\" (UniqueName: \"kubernetes.io/projected/f52431d9-53d4-415b-9e99-3e92fe7be4ca-kube-api-access-mq7qt\") pod \"heat-operator-controller-manager-56f54d6746-glmkz\" (UID: \"f52431d9-53d4-415b-9e99-3e92fe7be4ca\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.718327 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.747235 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.748979 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.754013 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qrrw6" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.754293 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.769574 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.774342 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj5vg\" (UniqueName: \"kubernetes.io/projected/ad04acbe-59a4-490c-ae4e-eacfbd65257c-kube-api-access-cj5vg\") pod \"designate-operator-controller-manager-767ccfd65f-9lqmh\" (UID: \"ad04acbe-59a4-490c-ae4e-eacfbd65257c\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.778672 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhcln\" (UniqueName: \"kubernetes.io/projected/0ca0fbbb-1734-4a4a-b996-c96aa000131c-kube-api-access-vhcln\") pod \"cinder-operator-controller-manager-6498cbf48f-6pwgl\" (UID: \"0ca0fbbb-1734-4a4a-b996-c96aa000131c\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.794358 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8w6l\" (UniqueName: \"kubernetes.io/projected/6c8c64e0-e4d5-45c1-a697-205deeb19c54-kube-api-access-r8w6l\") pod \"barbican-operator-controller-manager-75fb479bcc-pvdd8\" (UID: \"6c8c64e0-e4d5-45c1-a697-205deeb19c54\") " 
pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.794762 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.795669 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.796435 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.802250 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.802922 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lnj7b" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.813894 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtf5\" (UniqueName: \"kubernetes.io/projected/0f03538e-297e-410d-bf6e-0f947cba868c-kube-api-access-hxtf5\") pod \"infra-operator-controller-manager-6df98c44d8-rsqm2\" (UID: \"0f03538e-297e-410d-bf6e-0f947cba868c\") " pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.814008 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvjp2\" (UniqueName: \"kubernetes.io/projected/a991f440-958e-42d4-b062-7369966d84c3-kube-api-access-pvjp2\") pod \"glance-operator-controller-manager-7969689c84-jb6p4\" (UID: \"a991f440-958e-42d4-b062-7369966d84c3\") " 
pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.814214 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzxsw\" (UniqueName: \"kubernetes.io/projected/def4f2b0-daf8-48c1-95ab-98c2c6f8c72d-kube-api-access-gzxsw\") pod \"horizon-operator-controller-manager-598f69df5d-z7c6j\" (UID: \"def4f2b0-daf8-48c1-95ab-98c2c6f8c72d\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.814346 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq7qt\" (UniqueName: \"kubernetes.io/projected/f52431d9-53d4-415b-9e99-3e92fe7be4ca-kube-api-access-mq7qt\") pod \"heat-operator-controller-manager-56f54d6746-glmkz\" (UID: \"f52431d9-53d4-415b-9e99-3e92fe7be4ca\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.814394 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f03538e-297e-410d-bf6e-0f947cba868c-cert\") pod \"infra-operator-controller-manager-6df98c44d8-rsqm2\" (UID: \"0f03538e-297e-410d-bf6e-0f947cba868c\") " pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.826569 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.835109 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-zks44"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.844243 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.856256 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-zks44"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.858541 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-l47jt" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.866255 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.867618 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.878518 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvjp2\" (UniqueName: \"kubernetes.io/projected/a991f440-958e-42d4-b062-7369966d84c3-kube-api-access-pvjp2\") pod \"glance-operator-controller-manager-7969689c84-jb6p4\" (UID: \"a991f440-958e-42d4-b062-7369966d84c3\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.878841 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.883549 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vnpfq" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.898004 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.899594 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.899726 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.909734 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq7qt\" (UniqueName: \"kubernetes.io/projected/f52431d9-53d4-415b-9e99-3e92fe7be4ca-kube-api-access-mq7qt\") pod \"heat-operator-controller-manager-56f54d6746-glmkz\" (UID: \"f52431d9-53d4-415b-9e99-3e92fe7be4ca\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.910170 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gmgxp" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.919395 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f03538e-297e-410d-bf6e-0f947cba868c-cert\") pod \"infra-operator-controller-manager-6df98c44d8-rsqm2\" (UID: \"0f03538e-297e-410d-bf6e-0f947cba868c\") " pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.919478 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-877xn\" (UniqueName: \"kubernetes.io/projected/607c4e63-3cb6-43f8-86b0-7af4b07e81e4-kube-api-access-877xn\") pod \"keystone-operator-controller-manager-7454b96578-zks44\" (UID: \"607c4e63-3cb6-43f8-86b0-7af4b07e81e4\") " 
pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.919503 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxtf5\" (UniqueName: \"kubernetes.io/projected/0f03538e-297e-410d-bf6e-0f947cba868c-kube-api-access-hxtf5\") pod \"infra-operator-controller-manager-6df98c44d8-rsqm2\" (UID: \"0f03538e-297e-410d-bf6e-0f947cba868c\") " pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.919592 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5x2x\" (UniqueName: \"kubernetes.io/projected/6fc8a4d5-fad4-4eca-95c0-329b968d5c9d-kube-api-access-l5x2x\") pod \"ironic-operator-controller-manager-99b499f4-tjkbx\" (UID: \"6fc8a4d5-fad4-4eca-95c0-329b968d5c9d\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.919618 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzxsw\" (UniqueName: \"kubernetes.io/projected/def4f2b0-daf8-48c1-95ab-98c2c6f8c72d-kube-api-access-gzxsw\") pod \"horizon-operator-controller-manager-598f69df5d-z7c6j\" (UID: \"def4f2b0-daf8-48c1-95ab-98c2c6f8c72d\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" Nov 24 09:04:41 crc kubenswrapper[4886]: E1124 09:04:41.922741 4886 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 09:04:41 crc kubenswrapper[4886]: E1124 09:04:41.922837 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f03538e-297e-410d-bf6e-0f947cba868c-cert podName:0f03538e-297e-410d-bf6e-0f947cba868c nodeName:}" failed. 
No retries permitted until 2025-11-24 09:04:42.422815038 +0000 UTC m=+938.309553163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0f03538e-297e-410d-bf6e-0f947cba868c-cert") pod "infra-operator-controller-manager-6df98c44d8-rsqm2" (UID: "0f03538e-297e-410d-bf6e-0f947cba868c") : secret "infra-operator-webhook-server-cert" not found Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.923451 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.942068 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.943558 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.944201 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.951844 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-96gr7" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.960712 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxtf5\" (UniqueName: \"kubernetes.io/projected/0f03538e-297e-410d-bf6e-0f947cba868c-kube-api-access-hxtf5\") pod \"infra-operator-controller-manager-6df98c44d8-rsqm2\" (UID: \"0f03538e-297e-410d-bf6e-0f947cba868c\") " pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.961373 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.964306 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzxsw\" (UniqueName: \"kubernetes.io/projected/def4f2b0-daf8-48c1-95ab-98c2c6f8c72d-kube-api-access-gzxsw\") pod \"horizon-operator-controller-manager-598f69df5d-z7c6j\" (UID: \"def4f2b0-daf8-48c1-95ab-98c2c6f8c72d\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.964416 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz"] Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.975463 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.980219 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wbl8b" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.982782 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" Nov 24 09:04:41 crc kubenswrapper[4886]: I1124 09:04:41.997394 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.007000 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.008329 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.021486 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w6s2h" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.022649 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.039134 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5x2x\" (UniqueName: \"kubernetes.io/projected/6fc8a4d5-fad4-4eca-95c0-329b968d5c9d-kube-api-access-l5x2x\") pod \"ironic-operator-controller-manager-99b499f4-tjkbx\" (UID: \"6fc8a4d5-fad4-4eca-95c0-329b968d5c9d\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.042584 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smrqf\" (UniqueName: \"kubernetes.io/projected/9a2dc275-73a5-4caf-89fe-120ce9401655-kube-api-access-smrqf\") pod \"neutron-operator-controller-manager-78bd47f458-kczhh\" (UID: \"9a2dc275-73a5-4caf-89fe-120ce9401655\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.042684 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsf5v\" (UniqueName: \"kubernetes.io/projected/671e2772-1d7f-4c97-91f6-83f0782b4f6b-kube-api-access-dsf5v\") pod \"mariadb-operator-controller-manager-54b5986bb8-47vf5\" (UID: \"671e2772-1d7f-4c97-91f6-83f0782b4f6b\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 
09:04:42.042947 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjq9\" (UniqueName: \"kubernetes.io/projected/8aadf5e6-b19e-4b19-b812-50c5bd4721a4-kube-api-access-7fjq9\") pod \"octavia-operator-controller-manager-54cfbf4c7d-qnv8p\" (UID: \"8aadf5e6-b19e-4b19-b812-50c5bd4721a4\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.043126 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbx2c\" (UniqueName: \"kubernetes.io/projected/73e41e35-4218-492b-93d6-d068c687ee6e-kube-api-access-pbx2c\") pod \"manila-operator-controller-manager-58f887965d-5zcvh\" (UID: \"73e41e35-4218-492b-93d6-d068c687ee6e\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.043317 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-877xn\" (UniqueName: \"kubernetes.io/projected/607c4e63-3cb6-43f8-86b0-7af4b07e81e4-kube-api-access-877xn\") pod \"keystone-operator-controller-manager-7454b96578-zks44\" (UID: \"607c4e63-3cb6-43f8-86b0-7af4b07e81e4\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.043642 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw5vg\" (UniqueName: \"kubernetes.io/projected/f269ac9a-b191-4262-93bf-6cbd27c0d445-kube-api-access-cw5vg\") pod \"nova-operator-controller-manager-cfbb9c588-bpzxz\" (UID: \"f269ac9a-b191-4262-93bf-6cbd27c0d445\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.071832 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.111842 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.153568 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smrqf\" (UniqueName: \"kubernetes.io/projected/9a2dc275-73a5-4caf-89fe-120ce9401655-kube-api-access-smrqf\") pod \"neutron-operator-controller-manager-78bd47f458-kczhh\" (UID: \"9a2dc275-73a5-4caf-89fe-120ce9401655\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.182576 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.186366 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsf5v\" (UniqueName: \"kubernetes.io/projected/671e2772-1d7f-4c97-91f6-83f0782b4f6b-kube-api-access-dsf5v\") pod \"mariadb-operator-controller-manager-54b5986bb8-47vf5\" (UID: \"671e2772-1d7f-4c97-91f6-83f0782b4f6b\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.187013 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fjq9\" (UniqueName: \"kubernetes.io/projected/8aadf5e6-b19e-4b19-b812-50c5bd4721a4-kube-api-access-7fjq9\") pod \"octavia-operator-controller-manager-54cfbf4c7d-qnv8p\" (UID: \"8aadf5e6-b19e-4b19-b812-50c5bd4721a4\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.187099 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbx2c\" (UniqueName: \"kubernetes.io/projected/73e41e35-4218-492b-93d6-d068c687ee6e-kube-api-access-pbx2c\") pod \"manila-operator-controller-manager-58f887965d-5zcvh\" (UID: \"73e41e35-4218-492b-93d6-d068c687ee6e\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.187284 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw5vg\" (UniqueName: \"kubernetes.io/projected/f269ac9a-b191-4262-93bf-6cbd27c0d445-kube-api-access-cw5vg\") pod \"nova-operator-controller-manager-cfbb9c588-bpzxz\" (UID: \"f269ac9a-b191-4262-93bf-6cbd27c0d445\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.190508 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-n9pjr" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.197766 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.200563 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5x2x\" (UniqueName: \"kubernetes.io/projected/6fc8a4d5-fad4-4eca-95c0-329b968d5c9d-kube-api-access-l5x2x\") pod \"ironic-operator-controller-manager-99b499f4-tjkbx\" (UID: \"6fc8a4d5-fad4-4eca-95c0-329b968d5c9d\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.214078 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.221449 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smrqf\" (UniqueName: \"kubernetes.io/projected/9a2dc275-73a5-4caf-89fe-120ce9401655-kube-api-access-smrqf\") pod \"neutron-operator-controller-manager-78bd47f458-kczhh\" (UID: \"9a2dc275-73a5-4caf-89fe-120ce9401655\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.253564 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-877xn\" (UniqueName: \"kubernetes.io/projected/607c4e63-3cb6-43f8-86b0-7af4b07e81e4-kube-api-access-877xn\") pod \"keystone-operator-controller-manager-7454b96578-zks44\" (UID: \"607c4e63-3cb6-43f8-86b0-7af4b07e81e4\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.270408 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.272098 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.281669 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9mdc6" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.281699 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsf5v\" (UniqueName: \"kubernetes.io/projected/671e2772-1d7f-4c97-91f6-83f0782b4f6b-kube-api-access-dsf5v\") pod \"mariadb-operator-controller-manager-54b5986bb8-47vf5\" (UID: \"671e2772-1d7f-4c97-91f6-83f0782b4f6b\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.282748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fjq9\" (UniqueName: \"kubernetes.io/projected/8aadf5e6-b19e-4b19-b812-50c5bd4721a4-kube-api-access-7fjq9\") pod \"octavia-operator-controller-manager-54cfbf4c7d-qnv8p\" (UID: \"8aadf5e6-b19e-4b19-b812-50c5bd4721a4\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.284249 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.290850 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f4398e5-a5b8-4853-ac68-76385d1a749d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw\" (UID: \"6f4398e5-a5b8-4853-ac68-76385d1a749d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.307389 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6kl5\" (UniqueName: \"kubernetes.io/projected/26b9db43-5cbd-4513-8685-976bc2bccad8-kube-api-access-x6kl5\") pod \"ovn-operator-controller-manager-54fc5f65b7-z6p4s\" (UID: \"26b9db43-5cbd-4513-8685-976bc2bccad8\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.307491 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw5h7\" (UniqueName: \"kubernetes.io/projected/6f4398e5-a5b8-4853-ac68-76385d1a749d-kube-api-access-pw5h7\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw\" (UID: \"6f4398e5-a5b8-4853-ac68-76385d1a749d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.294700 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw5vg\" (UniqueName: \"kubernetes.io/projected/f269ac9a-b191-4262-93bf-6cbd27c0d445-kube-api-access-cw5vg\") pod \"nova-operator-controller-manager-cfbb9c588-bpzxz\" (UID: \"f269ac9a-b191-4262-93bf-6cbd27c0d445\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.297837 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.302141 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbx2c\" (UniqueName: \"kubernetes.io/projected/73e41e35-4218-492b-93d6-d068c687ee6e-kube-api-access-pbx2c\") pod \"manila-operator-controller-manager-58f887965d-5zcvh\" (UID: \"73e41e35-4218-492b-93d6-d068c687ee6e\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" Nov 24 09:04:42 crc 
kubenswrapper[4886]: I1124 09:04:42.309106 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.318696 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lldgc" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.323630 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.337186 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.338550 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.345073 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-d7j72" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.353255 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.368896 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.372245 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.397804 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.409060 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f4398e5-a5b8-4853-ac68-76385d1a749d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw\" (UID: \"6f4398e5-a5b8-4853-ac68-76385d1a749d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.409145 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv62z\" (UniqueName: \"kubernetes.io/projected/789de7d5-5a8b-4005-b37d-83057da5b4e7-kube-api-access-xv62z\") pod \"placement-operator-controller-manager-5b797b8dff-nwx4f\" (UID: \"789de7d5-5a8b-4005-b37d-83057da5b4e7\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.409191 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6kl5\" (UniqueName: \"kubernetes.io/projected/26b9db43-5cbd-4513-8685-976bc2bccad8-kube-api-access-x6kl5\") pod \"ovn-operator-controller-manager-54fc5f65b7-z6p4s\" (UID: \"26b9db43-5cbd-4513-8685-976bc2bccad8\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.409227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw5h7\" (UniqueName: \"kubernetes.io/projected/6f4398e5-a5b8-4853-ac68-76385d1a749d-kube-api-access-pw5h7\") pod 
\"openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw\" (UID: \"6f4398e5-a5b8-4853-ac68-76385d1a749d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:42 crc kubenswrapper[4886]: E1124 09:04:42.409725 4886 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 09:04:42 crc kubenswrapper[4886]: E1124 09:04:42.409811 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f4398e5-a5b8-4853-ac68-76385d1a749d-cert podName:6f4398e5-a5b8-4853-ac68-76385d1a749d nodeName:}" failed. No retries permitted until 2025-11-24 09:04:42.909788461 +0000 UTC m=+938.796526596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6f4398e5-a5b8-4853-ac68-76385d1a749d-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" (UID: "6f4398e5-a5b8-4853-ac68-76385d1a749d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.409851 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4fhr\" (UniqueName: \"kubernetes.io/projected/213c4726-cd5c-4f79-ac2a-bc3ca07f0019-kube-api-access-r4fhr\") pod \"swift-operator-controller-manager-d656998f4-qmlpw\" (UID: \"213c4726-cd5c-4f79-ac2a-bc3ca07f0019\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.419087 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.423234 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.424895 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.433185 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rn69g" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.439498 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.440860 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.440966 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.457403 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-d664l" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.465268 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6kl5\" (UniqueName: \"kubernetes.io/projected/26b9db43-5cbd-4513-8685-976bc2bccad8-kube-api-access-x6kl5\") pod \"ovn-operator-controller-manager-54fc5f65b7-z6p4s\" (UID: \"26b9db43-5cbd-4513-8685-976bc2bccad8\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.466590 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw5h7\" (UniqueName: \"kubernetes.io/projected/6f4398e5-a5b8-4853-ac68-76385d1a749d-kube-api-access-pw5h7\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw\" (UID: \"6f4398e5-a5b8-4853-ac68-76385d1a749d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.482136 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.482605 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.490595 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.493814 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.502208 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-g2m52" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.502474 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.503398 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.512429 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f03538e-297e-410d-bf6e-0f947cba868c-cert\") pod \"infra-operator-controller-manager-6df98c44d8-rsqm2\" (UID: \"0f03538e-297e-410d-bf6e-0f947cba868c\") " pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.512513 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv62z\" (UniqueName: \"kubernetes.io/projected/789de7d5-5a8b-4005-b37d-83057da5b4e7-kube-api-access-xv62z\") pod \"placement-operator-controller-manager-5b797b8dff-nwx4f\" (UID: \"789de7d5-5a8b-4005-b37d-83057da5b4e7\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.512565 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zjj\" (UniqueName: \"kubernetes.io/projected/ac24d05a-4485-4fad-a03c-2fb381960d7b-kube-api-access-z4zjj\") pod \"telemetry-operator-controller-manager-6d4bf84b58-62fz7\" (UID: 
\"ac24d05a-4485-4fad-a03c-2fb381960d7b\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.512598 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4fhr\" (UniqueName: \"kubernetes.io/projected/213c4726-cd5c-4f79-ac2a-bc3ca07f0019-kube-api-access-r4fhr\") pod \"swift-operator-controller-manager-d656998f4-qmlpw\" (UID: \"213c4726-cd5c-4f79-ac2a-bc3ca07f0019\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.512619 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqm6n\" (UniqueName: \"kubernetes.io/projected/e69be7ce-2069-42ab-a8c9-7b4c29243ff0-kube-api-access-sqm6n\") pod \"test-operator-controller-manager-b4c496f69-kgnpt\" (UID: \"e69be7ce-2069-42ab-a8c9-7b4c29243ff0\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.535617 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4fhr\" (UniqueName: \"kubernetes.io/projected/213c4726-cd5c-4f79-ac2a-bc3ca07f0019-kube-api-access-r4fhr\") pod \"swift-operator-controller-manager-d656998f4-qmlpw\" (UID: \"213c4726-cd5c-4f79-ac2a-bc3ca07f0019\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.543817 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.548790 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv62z\" (UniqueName: \"kubernetes.io/projected/789de7d5-5a8b-4005-b37d-83057da5b4e7-kube-api-access-xv62z\") pod \"placement-operator-controller-manager-5b797b8dff-nwx4f\" (UID: \"789de7d5-5a8b-4005-b37d-83057da5b4e7\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.548847 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f03538e-297e-410d-bf6e-0f947cba868c-cert\") pod \"infra-operator-controller-manager-6df98c44d8-rsqm2\" (UID: \"0f03538e-297e-410d-bf6e-0f947cba868c\") " pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.554369 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.575047 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.606071 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.614581 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.614830 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqm6n\" (UniqueName: \"kubernetes.io/projected/e69be7ce-2069-42ab-a8c9-7b4c29243ff0-kube-api-access-sqm6n\") pod \"test-operator-controller-manager-b4c496f69-kgnpt\" (UID: \"e69be7ce-2069-42ab-a8c9-7b4c29243ff0\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.616932 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.619270 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-58k4k" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.634342 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8zqd\" (UniqueName: \"kubernetes.io/projected/dc151242-3f76-4414-9a2b-a5e28adf12af-kube-api-access-m8zqd\") pod \"watcher-operator-controller-manager-8c6448b9f-7zbrr\" (UID: \"dc151242-3f76-4414-9a2b-a5e28adf12af\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.634500 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4zjj\" (UniqueName: \"kubernetes.io/projected/ac24d05a-4485-4fad-a03c-2fb381960d7b-kube-api-access-z4zjj\") pod \"telemetry-operator-controller-manager-6d4bf84b58-62fz7\" (UID: \"ac24d05a-4485-4fad-a03c-2fb381960d7b\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.637041 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sqm6n\" (UniqueName: \"kubernetes.io/projected/e69be7ce-2069-42ab-a8c9-7b4c29243ff0-kube-api-access-sqm6n\") pod \"test-operator-controller-manager-b4c496f69-kgnpt\" (UID: \"e69be7ce-2069-42ab-a8c9-7b4c29243ff0\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.637879 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.677663 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4zjj\" (UniqueName: \"kubernetes.io/projected/ac24d05a-4485-4fad-a03c-2fb381960d7b-kube-api-access-z4zjj\") pod \"telemetry-operator-controller-manager-6d4bf84b58-62fz7\" (UID: \"ac24d05a-4485-4fad-a03c-2fb381960d7b\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.684687 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.689239 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.703490 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.709558 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.709710 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.714651 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-4hz65" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.716405 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz"] Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.738896 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33c0c863-6350-4195-acb5-0dcc801d867b-cert\") pod \"openstack-operator-controller-manager-5cd7fdf8c-ztg92\" (UID: \"33c0c863-6350-4195-acb5-0dcc801d867b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.739054 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv52b\" (UniqueName: \"kubernetes.io/projected/33c0c863-6350-4195-acb5-0dcc801d867b-kube-api-access-hv52b\") pod \"openstack-operator-controller-manager-5cd7fdf8c-ztg92\" (UID: \"33c0c863-6350-4195-acb5-0dcc801d867b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.739495 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8zqd\" (UniqueName: \"kubernetes.io/projected/dc151242-3f76-4414-9a2b-a5e28adf12af-kube-api-access-m8zqd\") pod \"watcher-operator-controller-manager-8c6448b9f-7zbrr\" (UID: \"dc151242-3f76-4414-9a2b-a5e28adf12af\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.817660 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m8zqd\" (UniqueName: \"kubernetes.io/projected/dc151242-3f76-4414-9a2b-a5e28adf12af-kube-api-access-m8zqd\") pod \"watcher-operator-controller-manager-8c6448b9f-7zbrr\" (UID: \"dc151242-3f76-4414-9a2b-a5e28adf12af\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.843292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33c0c863-6350-4195-acb5-0dcc801d867b-cert\") pod \"openstack-operator-controller-manager-5cd7fdf8c-ztg92\" (UID: \"33c0c863-6350-4195-acb5-0dcc801d867b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.843375 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv52b\" (UniqueName: \"kubernetes.io/projected/33c0c863-6350-4195-acb5-0dcc801d867b-kube-api-access-hv52b\") pod \"openstack-operator-controller-manager-5cd7fdf8c-ztg92\" (UID: \"33c0c863-6350-4195-acb5-0dcc801d867b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.843437 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxzp\" (UniqueName: \"kubernetes.io/projected/50b161b3-4911-4ab1-b348-b1b52713c856-kube-api-access-8bxzp\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz\" (UID: \"50b161b3-4911-4ab1-b348-b1b52713c856\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" Nov 24 09:04:42 crc kubenswrapper[4886]: E1124 09:04:42.843644 4886 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 09:04:42 crc kubenswrapper[4886]: E1124 09:04:42.843709 4886 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33c0c863-6350-4195-acb5-0dcc801d867b-cert podName:33c0c863-6350-4195-acb5-0dcc801d867b nodeName:}" failed. No retries permitted until 2025-11-24 09:04:43.343685761 +0000 UTC m=+939.230423896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33c0c863-6350-4195-acb5-0dcc801d867b-cert") pod "openstack-operator-controller-manager-5cd7fdf8c-ztg92" (UID: "33c0c863-6350-4195-acb5-0dcc801d867b") : secret "webhook-server-cert" not found Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.876217 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.887491 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv52b\" (UniqueName: \"kubernetes.io/projected/33c0c863-6350-4195-acb5-0dcc801d867b-kube-api-access-hv52b\") pod \"openstack-operator-controller-manager-5cd7fdf8c-ztg92\" (UID: \"33c0c863-6350-4195-acb5-0dcc801d867b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.954575 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f4398e5-a5b8-4853-ac68-76385d1a749d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw\" (UID: \"6f4398e5-a5b8-4853-ac68-76385d1a749d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:42 crc kubenswrapper[4886]: I1124 09:04:42.954695 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxzp\" (UniqueName: \"kubernetes.io/projected/50b161b3-4911-4ab1-b348-b1b52713c856-kube-api-access-8bxzp\") pod 
\"rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz\" (UID: \"50b161b3-4911-4ab1-b348-b1b52713c856\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.014082 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl"] Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.020427 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f4398e5-a5b8-4853-ac68-76385d1a749d-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw\" (UID: \"6f4398e5-a5b8-4853-ac68-76385d1a749d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.027600 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxzp\" (UniqueName: \"kubernetes.io/projected/50b161b3-4911-4ab1-b348-b1b52713c856-kube-api-access-8bxzp\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz\" (UID: \"50b161b3-4911-4ab1-b348-b1b52713c856\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" Nov 24 09:04:43 crc kubenswrapper[4886]: W1124 09:04:43.063993 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca0fbbb_1734_4a4a_b996_c96aa000131c.slice/crio-6b4efb9e6179a43cbe8ba7f2a405fe0540f3b43bce979b44172b55dc5a398329 WatchSource:0}: Error finding container 6b4efb9e6179a43cbe8ba7f2a405fe0540f3b43bce979b44172b55dc5a398329: Status 404 returned error can't find the container with id 6b4efb9e6179a43cbe8ba7f2a405fe0540f3b43bce979b44172b55dc5a398329 Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.069461 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.100857 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.172236 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" event={"ID":"0ca0fbbb-1734-4a4a-b996-c96aa000131c","Type":"ContainerStarted","Data":"6b4efb9e6179a43cbe8ba7f2a405fe0540f3b43bce979b44172b55dc5a398329"} Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.190405 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.201637 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.389431 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33c0c863-6350-4195-acb5-0dcc801d867b-cert\") pod \"openstack-operator-controller-manager-5cd7fdf8c-ztg92\" (UID: \"33c0c863-6350-4195-acb5-0dcc801d867b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.398062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33c0c863-6350-4195-acb5-0dcc801d867b-cert\") pod \"openstack-operator-controller-manager-5cd7fdf8c-ztg92\" (UID: \"33c0c863-6350-4195-acb5-0dcc801d867b\") " pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:43 crc kubenswrapper[4886]: 
I1124 09:04:43.485837 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.687453 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx"] Nov 24 09:04:43 crc kubenswrapper[4886]: W1124 09:04:43.713759 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad04acbe_59a4_490c_ae4e_eacfbd65257c.slice/crio-eb58941e24d23671f2cb2b022f3ef04b7166a2cdfe553bff01a7690387ce1875 WatchSource:0}: Error finding container eb58941e24d23671f2cb2b022f3ef04b7166a2cdfe553bff01a7690387ce1875: Status 404 returned error can't find the container with id eb58941e24d23671f2cb2b022f3ef04b7166a2cdfe553bff01a7690387ce1875 Nov 24 09:04:43 crc kubenswrapper[4886]: I1124 09:04:43.715479 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.043867 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.077536 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.112274 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.122068 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.153037 4886 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.172244 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.199443 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" event={"ID":"f269ac9a-b191-4262-93bf-6cbd27c0d445","Type":"ContainerStarted","Data":"52b5fc575891885159a09c23fdfd173907ae939fa8ae3cc8dd67abfa179d28ca"} Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.205208 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.230132 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.238504 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-zks44"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.239302 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" event={"ID":"6fc8a4d5-fad4-4eca-95c0-329b968d5c9d","Type":"ContainerStarted","Data":"67f5f1b83f40f7ab12e8bc3f50c568eb5212bde50b57ad1de003faf05d3a88b6"} Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.248317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" event={"ID":"73e41e35-4218-492b-93d6-d068c687ee6e","Type":"ContainerStarted","Data":"fc8f1a36d44e13eb3533a3306a8ba8a3afcdccc2d42be70316bdca52c3cf263e"} Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.250005 
4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.259324 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.263388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" event={"ID":"6c8c64e0-e4d5-45c1-a697-205deeb19c54","Type":"ContainerStarted","Data":"4a43ef0c73a05f2118929b10264b861f2bcc7a66521ddf62d93510d265400062"} Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.273925 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" event={"ID":"ad04acbe-59a4-490c-ae4e-eacfbd65257c","Type":"ContainerStarted","Data":"eb58941e24d23671f2cb2b022f3ef04b7166a2cdfe553bff01a7690387ce1875"} Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.285338 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.307438 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.314972 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr"] Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.327888 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqm6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-controller-manager-b4c496f69-kgnpt_openstack-operators(e69be7ce-2069-42ab-a8c9-7b4c29243ff0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.327882 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r4fhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d656998f4-qmlpw_openstack-operators(213c4726-cd5c-4f79-ac2a-bc3ca07f0019): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.328038 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mq7qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-56f54d6746-glmkz_openstack-operators(f52431d9-53d4-415b-9e99-3e92fe7be4ca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.331619 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z4zjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
telemetry-operator-controller-manager-6d4bf84b58-62fz7_openstack-operators(ac24d05a-4485-4fad-a03c-2fb381960d7b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:04:44 crc kubenswrapper[4886]: W1124 09:04:44.335664 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26b9db43_5cbd_4513_8685_976bc2bccad8.slice/crio-83b007f4d731594072c80a231a43cc7b09859b4df5d490022c94647d2e325837 WatchSource:0}: Error finding container 83b007f4d731594072c80a231a43cc7b09859b4df5d490022c94647d2e325837: Status 404 returned error can't find the container with id 83b007f4d731594072c80a231a43cc7b09859b4df5d490022c94647d2e325837 Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.339427 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt"] Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.346734 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x6kl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-54fc5f65b7-z6p4s_openstack-operators(26b9db43-5cbd-4513-8685-976bc2bccad8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.348775 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f"] Nov 24 09:04:44 crc kubenswrapper[4886]: W1124 09:04:44.357101 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod789de7d5_5a8b_4005_b37d_83057da5b4e7.slice/crio-8e2fb057a448a0cb685c6d3bc7cb1f904d1d6fad7196b5b587c8cfeb0075310f WatchSource:0}: Error finding container 8e2fb057a448a0cb685c6d3bc7cb1f904d1d6fad7196b5b587c8cfeb0075310f: Status 404 returned error can't find the container with id 8e2fb057a448a0cb685c6d3bc7cb1f904d1d6fad7196b5b587c8cfeb0075310f Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.361316 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s"] Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.363746 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv62z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b797b8dff-nwx4f_openstack-operators(789de7d5-5a8b-4005-b37d-83057da5b4e7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:04:44 crc kubenswrapper[4886]: W1124 09:04:44.364554 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50b161b3_4911_4ab1_b348_b1b52713c856.slice/crio-7f46b45a29cb4dc34e944b227c6060654659193263b75c46f7aeff2c0986dac2 WatchSource:0}: Error finding container 7f46b45a29cb4dc34e944b227c6060654659193263b75c46f7aeff2c0986dac2: Status 404 returned error can't find the container with id 7f46b45a29cb4dc34e944b227c6060654659193263b75c46f7aeff2c0986dac2 Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.366671 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz"] Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.369077 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bxzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz_openstack-operators(50b161b3-4911-4ab1-b348-b1b52713c856): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.371391 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" podUID="50b161b3-4911-4ab1-b348-b1b52713c856" Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.443408 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92"] Nov 24 09:04:44 crc kubenswrapper[4886]: I1124 09:04:44.455834 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw"] Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.668238 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\"" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" podUID="f52431d9-53d4-415b-9e99-3e92fe7be4ca" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.668381 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" podUID="e69be7ce-2069-42ab-a8c9-7b4c29243ff0" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.725547 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" podUID="ac24d05a-4485-4fad-a03c-2fb381960d7b" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.768476 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" podUID="213c4726-cd5c-4f79-ac2a-bc3ca07f0019" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.820796 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" podUID="789de7d5-5a8b-4005-b37d-83057da5b4e7" Nov 24 09:04:44 crc kubenswrapper[4886]: E1124 09:04:44.996848 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" podUID="26b9db43-5cbd-4513-8685-976bc2bccad8" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.577101 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" event={"ID":"607c4e63-3cb6-43f8-86b0-7af4b07e81e4","Type":"ContainerStarted","Data":"94ffd0a1fec82abfa45193b721ee1ee7f01bb47afb3dfbcbc2e2e897307c1c13"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.595743 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" event={"ID":"e69be7ce-2069-42ab-a8c9-7b4c29243ff0","Type":"ContainerStarted","Data":"08fe9e37b587a3c847560b7b7bf01d4a83de4fb8e831042e0a400428f8874dfc"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.595799 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" event={"ID":"e69be7ce-2069-42ab-a8c9-7b4c29243ff0","Type":"ContainerStarted","Data":"fd63df438a973363fe78b82a2f245bebdc86f465aac0e543f63de58c8d7042c3"} Nov 24 09:04:45 crc kubenswrapper[4886]: E1124 09:04:45.599855 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" podUID="e69be7ce-2069-42ab-a8c9-7b4c29243ff0" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.600292 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" event={"ID":"a991f440-958e-42d4-b062-7369966d84c3","Type":"ContainerStarted","Data":"55dcbc682eaafd408e3679554e74a207200b19c0eb24c352777b13d317eabbae"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.603491 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" 
event={"ID":"789de7d5-5a8b-4005-b37d-83057da5b4e7","Type":"ContainerStarted","Data":"ecf2b67ac3ea644d14da6a4cd047d79196829d994d62bdcfcd8f5052e0d8efbe"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.603531 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" event={"ID":"789de7d5-5a8b-4005-b37d-83057da5b4e7","Type":"ContainerStarted","Data":"8e2fb057a448a0cb685c6d3bc7cb1f904d1d6fad7196b5b587c8cfeb0075310f"} Nov 24 09:04:45 crc kubenswrapper[4886]: E1124 09:04:45.605057 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" podUID="789de7d5-5a8b-4005-b37d-83057da5b4e7" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.606040 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" event={"ID":"0f03538e-297e-410d-bf6e-0f947cba868c","Type":"ContainerStarted","Data":"c6da80742ff52ca5763a040fb1992f4ec778fd2c48110ff27de5b1f9c8a29114"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.631666 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" event={"ID":"33c0c863-6350-4195-acb5-0dcc801d867b","Type":"ContainerStarted","Data":"0e4ed32cf2c94c6c6c100ca8903067721960e4bf3fe808d96324c163401812d5"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.631719 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" 
event={"ID":"33c0c863-6350-4195-acb5-0dcc801d867b","Type":"ContainerStarted","Data":"05b0f492cc06b301ce23ddf4ae7d4bc323d4c3b71fc63b6c537eb2605d4aca7e"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.632840 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.634454 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" event={"ID":"671e2772-1d7f-4c97-91f6-83f0782b4f6b","Type":"ContainerStarted","Data":"58473728bd0c26dba7d43d5eae1d9da640dec4cb921b8ac97759d30dbe005921"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.636401 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" event={"ID":"213c4726-cd5c-4f79-ac2a-bc3ca07f0019","Type":"ContainerStarted","Data":"ef94c02e780435e7ee103f7cfc1ed9ff63b2eb1061f9fc45c1e47cf29c22752c"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.636458 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" event={"ID":"213c4726-cd5c-4f79-ac2a-bc3ca07f0019","Type":"ContainerStarted","Data":"6eca7a17e01fb9cbd90b36a3e1d5f8f27e6e541e03526f9b181b8bde0394b0e1"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.642589 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" event={"ID":"8aadf5e6-b19e-4b19-b812-50c5bd4721a4","Type":"ContainerStarted","Data":"cf925ece905f4f2fecc6b25c1c931ba860a8e5a0c219fa55bda6417d5022e179"} Nov 24 09:04:45 crc kubenswrapper[4886]: E1124 09:04:45.642664 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" podUID="213c4726-cd5c-4f79-ac2a-bc3ca07f0019" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.653761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" event={"ID":"50b161b3-4911-4ab1-b348-b1b52713c856","Type":"ContainerStarted","Data":"7f46b45a29cb4dc34e944b227c6060654659193263b75c46f7aeff2c0986dac2"} Nov 24 09:04:45 crc kubenswrapper[4886]: E1124 09:04:45.656753 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" podUID="50b161b3-4911-4ab1-b348-b1b52713c856" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.681617 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" event={"ID":"f52431d9-53d4-415b-9e99-3e92fe7be4ca","Type":"ContainerStarted","Data":"36c7126acb3ae8cfaf87bcec74575e800c6b5a6abbd1c2051936165889c4bbba"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.681688 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" event={"ID":"f52431d9-53d4-415b-9e99-3e92fe7be4ca","Type":"ContainerStarted","Data":"9827b26788cff6603d40630ecf854b5ec72c06bb7d0344a933fabcb9ab25b424"} Nov 24 09:04:45 crc kubenswrapper[4886]: E1124 09:04:45.683114 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96\\\"\"" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" podUID="f52431d9-53d4-415b-9e99-3e92fe7be4ca" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.686979 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" event={"ID":"9a2dc275-73a5-4caf-89fe-120ce9401655","Type":"ContainerStarted","Data":"9798124d36fecf26c05d44822619299398d24147525420a910f52039f95ea5e1"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.699751 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" event={"ID":"ac24d05a-4485-4fad-a03c-2fb381960d7b","Type":"ContainerStarted","Data":"21cc19b25aaa16d9570591e35a2ae595c2103f2483ae9795803c042091d1e4c6"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.699807 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" event={"ID":"ac24d05a-4485-4fad-a03c-2fb381960d7b","Type":"ContainerStarted","Data":"934d944d9fc4b758320a3e99cd0fa92d03fdfccf43e27d41618635d0242c0d54"} Nov 24 09:04:45 crc kubenswrapper[4886]: E1124 09:04:45.705457 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" podUID="ac24d05a-4485-4fad-a03c-2fb381960d7b" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.707809 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" 
event={"ID":"6f4398e5-a5b8-4853-ac68-76385d1a749d","Type":"ContainerStarted","Data":"95acdbb30893978f8d83445f5bf348477f7141095bc6b107bcf3b16584c2a686"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.713256 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" event={"ID":"def4f2b0-daf8-48c1-95ab-98c2c6f8c72d","Type":"ContainerStarted","Data":"dd0c1bcf47ece9cedfcfc69c5278a6d2091792d90ac4d3e411569577bb4ef62b"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.731137 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" event={"ID":"dc151242-3f76-4414-9a2b-a5e28adf12af","Type":"ContainerStarted","Data":"3328efbfe824026e84f7e387501ca6d0401f656dc61c446179389e5b993c2195"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.734496 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" event={"ID":"26b9db43-5cbd-4513-8685-976bc2bccad8","Type":"ContainerStarted","Data":"00732d2766aefc8df73f4c8c9c1cbaae43919d6ee7144737378c8c4cd82c7c31"} Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.734536 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" event={"ID":"26b9db43-5cbd-4513-8685-976bc2bccad8","Type":"ContainerStarted","Data":"83b007f4d731594072c80a231a43cc7b09859b4df5d490022c94647d2e325837"} Nov 24 09:04:45 crc kubenswrapper[4886]: E1124 09:04:45.742409 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" 
podUID="26b9db43-5cbd-4513-8685-976bc2bccad8" Nov 24 09:04:45 crc kubenswrapper[4886]: I1124 09:04:45.976041 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" podStartSLOduration=3.9760003470000003 podStartE2EDuration="3.976000347s" podCreationTimestamp="2025-11-24 09:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:04:45.967056309 +0000 UTC m=+941.853794444" watchObservedRunningTime="2025-11-24 09:04:45.976000347 +0000 UTC m=+941.862738482" Nov 24 09:04:46 crc kubenswrapper[4886]: I1124 09:04:46.789106 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" event={"ID":"33c0c863-6350-4195-acb5-0dcc801d867b","Type":"ContainerStarted","Data":"eb4cc4856214baec070b9e7420be872b63002b06a6ebe02c601c191f29e537ae"} Nov 24 09:04:46 crc kubenswrapper[4886]: E1124 09:04:46.804434 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" podUID="789de7d5-5a8b-4005-b37d-83057da5b4e7" Nov 24 09:04:46 crc kubenswrapper[4886]: E1124 09:04:46.804801 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" podUID="50b161b3-4911-4ab1-b348-b1b52713c856" Nov 24 09:04:46 crc 
kubenswrapper[4886]: E1124 09:04:46.804888 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" podUID="213c4726-cd5c-4f79-ac2a-bc3ca07f0019" Nov 24 09:04:46 crc kubenswrapper[4886]: E1124 09:04:46.804923 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" podUID="ac24d05a-4485-4fad-a03c-2fb381960d7b" Nov 24 09:04:46 crc kubenswrapper[4886]: E1124 09:04:46.804945 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" podUID="26b9db43-5cbd-4513-8685-976bc2bccad8" Nov 24 09:04:46 crc kubenswrapper[4886]: E1124 09:04:46.804987 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" podUID="e69be7ce-2069-42ab-a8c9-7b4c29243ff0" Nov 24 09:04:46 crc kubenswrapper[4886]: E1124 09:04:46.805007 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96\\\"\"" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" podUID="f52431d9-53d4-415b-9e99-3e92fe7be4ca" Nov 24 09:04:53 crc kubenswrapper[4886]: I1124 09:04:53.493133 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cd7fdf8c-ztg92" Nov 24 09:04:57 crc kubenswrapper[4886]: I1124 09:04:57.850495 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:05:05 crc kubenswrapper[4886]: E1124 09:05:05.861088 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377" Nov 24 09:05:05 crc kubenswrapper[4886]: E1124 09:05:05.861696 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l5x2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-99b499f4-tjkbx_openstack-operators(6fc8a4d5-fad4-4eca-95c0-329b968d5c9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:05:06 crc kubenswrapper[4886]: E1124 09:05:06.719205 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 24 09:05:06 crc kubenswrapper[4886]: E1124 09:05:06.719494 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbx2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58f887965d-5zcvh_openstack-operators(73e41e35-4218-492b-93d6-d068c687ee6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:05:07 crc kubenswrapper[4886]: E1124 09:05:07.242705 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 24 09:05:07 crc kubenswrapper[4886]: E1124 09:05:07.243307 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-877xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-7454b96578-zks44_openstack-operators(607c4e63-3cb6-43f8-86b0-7af4b07e81e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:05:09 crc kubenswrapper[4886]: E1124 09:05:09.451293 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 24 09:05:09 crc kubenswrapper[4886]: E1124 09:05:09.451845 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8zqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-8c6448b9f-7zbrr_openstack-operators(dc151242-3f76-4414-9a2b-a5e28adf12af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:05:09 crc kubenswrapper[4886]: E1124 09:05:09.526499 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.53:5001/openstack-k8s-operators/infra-operator:4997532743541199919c49dcbcd62c97cf913a0c" Nov 24 09:05:09 crc kubenswrapper[4886]: E1124 09:05:09.526592 4886 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.53:5001/openstack-k8s-operators/infra-operator:4997532743541199919c49dcbcd62c97cf913a0c" Nov 24 09:05:09 crc kubenswrapper[4886]: E1124 09:05:09.526848 4886 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.53:5001/openstack-k8s-operators/infra-operator:4997532743541199919c49dcbcd62c97cf913a0c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxtf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-6df98c44d8-rsqm2_openstack-operators(0f03538e-297e-410d-bf6e-0f947cba868c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:05:14 crc kubenswrapper[4886]: E1124 09:05:14.686633 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" podUID="6fc8a4d5-fad4-4eca-95c0-329b968d5c9d" Nov 24 09:05:14 crc kubenswrapper[4886]: E1124 09:05:14.744486 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" podUID="0f03538e-297e-410d-bf6e-0f947cba868c" Nov 24 09:05:14 crc kubenswrapper[4886]: E1124 09:05:14.910363 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" podUID="dc151242-3f76-4414-9a2b-a5e28adf12af" Nov 24 09:05:15 crc kubenswrapper[4886]: E1124 09:05:15.041999 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" podUID="607c4e63-3cb6-43f8-86b0-7af4b07e81e4" Nov 24 09:05:15 crc kubenswrapper[4886]: E1124 09:05:15.053027 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" podUID="73e41e35-4218-492b-93d6-d068c687ee6e" Nov 24 09:05:15 crc kubenswrapper[4886]: I1124 09:05:15.054201 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" event={"ID":"0ca0fbbb-1734-4a4a-b996-c96aa000131c","Type":"ContainerStarted","Data":"e0d75e93e278d52528252849260224d4c8703607d08bc3dab27e702bc3943f84"} Nov 24 09:05:15 crc kubenswrapper[4886]: I1124 09:05:15.057024 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" event={"ID":"8aadf5e6-b19e-4b19-b812-50c5bd4721a4","Type":"ContainerStarted","Data":"c4e148f75494a4d5213eb02e40699cbbc3c762884c8ae6bb71dc27fe3ccd60ae"} Nov 24 09:05:15 crc kubenswrapper[4886]: E1124 09:05:15.079504 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" 
pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" podUID="73e41e35-4218-492b-93d6-d068c687ee6e" Nov 24 09:05:15 crc kubenswrapper[4886]: I1124 09:05:15.082800 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" event={"ID":"6fc8a4d5-fad4-4eca-95c0-329b968d5c9d","Type":"ContainerStarted","Data":"b4b467fe62e53352cab639ad6643318225b2659b8cab5415a7bcf42b2f372eb2"} Nov 24 09:05:15 crc kubenswrapper[4886]: E1124 09:05:15.089718 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" podUID="6fc8a4d5-fad4-4eca-95c0-329b968d5c9d" Nov 24 09:05:15 crc kubenswrapper[4886]: I1124 09:05:15.102507 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" event={"ID":"9a2dc275-73a5-4caf-89fe-120ce9401655","Type":"ContainerStarted","Data":"1fcf56b282da2b1df4c86b9fbea349e73508a9b47ba6bccfd612d5c704da6b8f"} Nov 24 09:05:15 crc kubenswrapper[4886]: I1124 09:05:15.144556 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" event={"ID":"def4f2b0-daf8-48c1-95ab-98c2c6f8c72d","Type":"ContainerStarted","Data":"bc0b887da9e2a6961218750de3f32199099563a6091dc0aab6025d94acbd38e3"} Nov 24 09:05:15 crc kubenswrapper[4886]: I1124 09:05:15.162469 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" event={"ID":"dc151242-3f76-4414-9a2b-a5e28adf12af","Type":"ContainerStarted","Data":"134c47269e25655220db028386f7fe99fadaa7a108daf1b6ac8e00a9875d960d"} Nov 24 
09:05:15 crc kubenswrapper[4886]: E1124 09:05:15.167681 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" podUID="dc151242-3f76-4414-9a2b-a5e28adf12af" Nov 24 09:05:15 crc kubenswrapper[4886]: I1124 09:05:15.168981 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" event={"ID":"0f03538e-297e-410d-bf6e-0f947cba868c","Type":"ContainerStarted","Data":"648e2be7774ae95c5267f7ef94663f7fb2f5e200dd6f05ced543b3c1365a7c51"} Nov 24 09:05:15 crc kubenswrapper[4886]: E1124 09:05:15.170082 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.53:5001/openstack-k8s-operators/infra-operator:4997532743541199919c49dcbcd62c97cf913a0c\\\"\"" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" podUID="0f03538e-297e-410d-bf6e-0f947cba868c" Nov 24 09:05:15 crc kubenswrapper[4886]: I1124 09:05:15.171366 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" event={"ID":"671e2772-1d7f-4c97-91f6-83f0782b4f6b","Type":"ContainerStarted","Data":"571d52517acfceea6c6a07b8ee42c8e77ecd79785ca94cfe97d8a6e6963618b8"} Nov 24 09:05:15 crc kubenswrapper[4886]: E1124 09:05:15.187891 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\"" 
pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" podUID="607c4e63-3cb6-43f8-86b0-7af4b07e81e4" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.201630 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" event={"ID":"671e2772-1d7f-4c97-91f6-83f0782b4f6b","Type":"ContainerStarted","Data":"72b4ea39d6ba3af43e8e41976144f8b646373221f1f424933ab4d0d3156aac9c"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.202933 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.221222 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" event={"ID":"6c8c64e0-e4d5-45c1-a697-205deeb19c54","Type":"ContainerStarted","Data":"37b385672e37a8751cf526a60da1fd09252eae3bbe76532c978e54776b113424"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.221279 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" event={"ID":"6c8c64e0-e4d5-45c1-a697-205deeb19c54","Type":"ContainerStarted","Data":"958e600bdaeeac09d9b603fc820e68444c62529a7ab54fcbe4f7ebf13ec7fb8d"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.221399 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.227974 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" event={"ID":"def4f2b0-daf8-48c1-95ab-98c2c6f8c72d","Type":"ContainerStarted","Data":"7b1f2e9bdeb4d772f5c0f3b59b581e14222639d99cbd4872051df2f5fca537cd"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 
09:05:16.228125 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.232321 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" event={"ID":"73e41e35-4218-492b-93d6-d068c687ee6e","Type":"ContainerStarted","Data":"ebf8c2757df1e5798086adc44a4622a51d92aff465697082124c26124228a770"} Nov 24 09:05:16 crc kubenswrapper[4886]: E1124 09:05:16.234933 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" podUID="73e41e35-4218-492b-93d6-d068c687ee6e" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.241243 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" event={"ID":"f52431d9-53d4-415b-9e99-3e92fe7be4ca","Type":"ContainerStarted","Data":"637d5e1be73e9bee4e8c01ae23a80754f8c95e8f8d30ef82d73a7102f457049a"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.242290 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.246042 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" podStartSLOduration=7.269966202 podStartE2EDuration="35.246024015s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.273486491 +0000 UTC m=+940.160224626" lastFinishedPulling="2025-11-24 
09:05:12.249544304 +0000 UTC m=+968.136282439" observedRunningTime="2025-11-24 09:05:16.236348316 +0000 UTC m=+972.123086451" watchObservedRunningTime="2025-11-24 09:05:16.246024015 +0000 UTC m=+972.132762150" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.253059 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" event={"ID":"607c4e63-3cb6-43f8-86b0-7af4b07e81e4","Type":"ContainerStarted","Data":"8d717c0defed1153c0d1d56f1a7b99938211801d5555cd06a35c2139d49e321f"} Nov 24 09:05:16 crc kubenswrapper[4886]: E1124 09:05:16.255185 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" podUID="607c4e63-3cb6-43f8-86b0-7af4b07e81e4" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.259288 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" event={"ID":"e69be7ce-2069-42ab-a8c9-7b4c29243ff0","Type":"ContainerStarted","Data":"c81f7a7ad7951e5f2eb0b3496c8c7480ee6cb01ab8ecb1b5f7f6f1eda0548194"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.259934 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.269025 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" event={"ID":"26b9db43-5cbd-4513-8685-976bc2bccad8","Type":"ContainerStarted","Data":"98b2c0380307257fc4772df56d25c2908884cd63aceb7681659a919ba5e25b70"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.269872 
4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.271427 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" event={"ID":"ac24d05a-4485-4fad-a03c-2fb381960d7b","Type":"ContainerStarted","Data":"1d2bb418750f24fb4561c929d38e8b92c88677c2c1a386efaae45595279f6164"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.271829 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.272931 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" event={"ID":"213c4726-cd5c-4f79-ac2a-bc3ca07f0019","Type":"ContainerStarted","Data":"cdf362f05c754c168fa344deeb310fa27303a38962de864850aff4b9da090f8c"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.273329 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.274929 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" event={"ID":"789de7d5-5a8b-4005-b37d-83057da5b4e7","Type":"ContainerStarted","Data":"26b79d86a876d3584732c06fdc88a9ca52870ead00479fef390591234fd74214"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.275183 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.276408 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" event={"ID":"f269ac9a-b191-4262-93bf-6cbd27c0d445","Type":"ContainerStarted","Data":"6e1830b4a9860f39ffbec59d2f95ab1f00427db7cbdfc0aff8563523cafa7d15"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.276432 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" event={"ID":"f269ac9a-b191-4262-93bf-6cbd27c0d445","Type":"ContainerStarted","Data":"f27ce69c915788f83942306bb9084ef0c5bcfd8c58799a9dd75185859eeefff8"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.276790 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.277839 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" event={"ID":"6f4398e5-a5b8-4853-ac68-76385d1a749d","Type":"ContainerStarted","Data":"d498c502d3862051f3c664a21dfafc14c9e88ede5da2b1502d2cb2e3343f3d31"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.278249 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.279140 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" event={"ID":"50b161b3-4911-4ab1-b348-b1b52713c856","Type":"ContainerStarted","Data":"e3821b765dc83eb8f2638c3ac892cff0f2a66afa5bed65c333646d08e6b91c3a"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.290757 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" 
event={"ID":"ad04acbe-59a4-490c-ae4e-eacfbd65257c","Type":"ContainerStarted","Data":"304315d2e682ee2ba3720ea9870357cefbe8fdc90bf10fbc6b9e287ff39a8110"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.291516 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.298945 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" event={"ID":"8aadf5e6-b19e-4b19-b812-50c5bd4721a4","Type":"ContainerStarted","Data":"69c586695d963390abe5a13aac69b84c0ca5413eb4651ef9eac015e12c4a00ae"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.299869 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.314124 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" event={"ID":"a991f440-958e-42d4-b062-7369966d84c3","Type":"ContainerStarted","Data":"6bde1e88a032e02a1bca8d74b80b9faeeb2b1262e5a91bdd2b843674bac7154b"} Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.315336 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" Nov 24 09:05:16 crc kubenswrapper[4886]: E1124 09:05:16.318855 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.53:5001/openstack-k8s-operators/infra-operator:4997532743541199919c49dcbcd62c97cf913a0c\\\"\"" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" podUID="0f03538e-297e-410d-bf6e-0f947cba868c" Nov 24 09:05:16 crc kubenswrapper[4886]: E1124 09:05:16.318976 4886 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" podUID="dc151242-3f76-4414-9a2b-a5e28adf12af" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.322619 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" podStartSLOduration=7.240323467 podStartE2EDuration="35.322598857s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.168604122 +0000 UTC m=+940.055342257" lastFinishedPulling="2025-11-24 09:05:12.250879512 +0000 UTC m=+968.137617647" observedRunningTime="2025-11-24 09:05:16.314240795 +0000 UTC m=+972.200978930" watchObservedRunningTime="2025-11-24 09:05:16.322598857 +0000 UTC m=+972.209336992" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.350936 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" podStartSLOduration=7.783456452 podStartE2EDuration="35.350905934s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.270786643 +0000 UTC m=+940.157524778" lastFinishedPulling="2025-11-24 09:05:11.838236125 +0000 UTC m=+967.724974260" observedRunningTime="2025-11-24 09:05:16.347329191 +0000 UTC m=+972.234067336" watchObservedRunningTime="2025-11-24 09:05:16.350905934 +0000 UTC m=+972.237644069" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.418054 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" podStartSLOduration=4.37679738 
podStartE2EDuration="34.418027722s" podCreationTimestamp="2025-11-24 09:04:42 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.363555232 +0000 UTC m=+940.250293367" lastFinishedPulling="2025-11-24 09:05:14.404785574 +0000 UTC m=+970.291523709" observedRunningTime="2025-11-24 09:05:16.387174471 +0000 UTC m=+972.273912616" watchObservedRunningTime="2025-11-24 09:05:16.418027722 +0000 UTC m=+972.304765857" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.454787 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" podStartSLOduration=4.306422608 podStartE2EDuration="34.454764663s" podCreationTimestamp="2025-11-24 09:04:42 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.327633375 +0000 UTC m=+940.214371510" lastFinishedPulling="2025-11-24 09:05:14.47597543 +0000 UTC m=+970.362713565" observedRunningTime="2025-11-24 09:05:16.451851459 +0000 UTC m=+972.338589594" watchObservedRunningTime="2025-11-24 09:05:16.454764663 +0000 UTC m=+972.341502798" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.481201 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" podStartSLOduration=4.378659094 podStartE2EDuration="34.481179146s" podCreationTimestamp="2025-11-24 09:04:42 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.331392833 +0000 UTC m=+940.218130968" lastFinishedPulling="2025-11-24 09:05:14.433912885 +0000 UTC m=+970.320651020" observedRunningTime="2025-11-24 09:05:16.479809777 +0000 UTC m=+972.366547912" watchObservedRunningTime="2025-11-24 09:05:16.481179146 +0000 UTC m=+972.367917281" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.524874 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" podStartSLOduration=9.429574338 
podStartE2EDuration="35.524844097s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.275965153 +0000 UTC m=+940.162703288" lastFinishedPulling="2025-11-24 09:05:10.371234922 +0000 UTC m=+966.257973047" observedRunningTime="2025-11-24 09:05:16.501975927 +0000 UTC m=+972.388714072" watchObservedRunningTime="2025-11-24 09:05:16.524844097 +0000 UTC m=+972.411582252" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.530144 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" podStartSLOduration=6.997586196 podStartE2EDuration="35.530115599s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:43.716884897 +0000 UTC m=+939.603623022" lastFinishedPulling="2025-11-24 09:05:12.24941429 +0000 UTC m=+968.136152425" observedRunningTime="2025-11-24 09:05:16.523788557 +0000 UTC m=+972.410526692" watchObservedRunningTime="2025-11-24 09:05:16.530115599 +0000 UTC m=+972.416853734" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.557479 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" podStartSLOduration=5.480511576 podStartE2EDuration="35.557453739s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.327861521 +0000 UTC m=+940.214599656" lastFinishedPulling="2025-11-24 09:05:14.404803684 +0000 UTC m=+970.291541819" observedRunningTime="2025-11-24 09:05:16.551714763 +0000 UTC m=+972.438452908" watchObservedRunningTime="2025-11-24 09:05:16.557453739 +0000 UTC m=+972.444191874" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.574201 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" podStartSLOduration=7.547016044 
podStartE2EDuration="35.574166552s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.222420966 +0000 UTC m=+940.109159101" lastFinishedPulling="2025-11-24 09:05:12.249571464 +0000 UTC m=+968.136309609" observedRunningTime="2025-11-24 09:05:16.57063992 +0000 UTC m=+972.457378065" watchObservedRunningTime="2025-11-24 09:05:16.574166552 +0000 UTC m=+972.460904687" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.596531 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz" podStartSLOduration=4.388484139 podStartE2EDuration="34.596505527s" podCreationTimestamp="2025-11-24 09:04:42 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.368809424 +0000 UTC m=+940.255547559" lastFinishedPulling="2025-11-24 09:05:14.576830812 +0000 UTC m=+970.463568947" observedRunningTime="2025-11-24 09:05:16.59454463 +0000 UTC m=+972.481282775" watchObservedRunningTime="2025-11-24 09:05:16.596505527 +0000 UTC m=+972.483243692" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.674913 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" podStartSLOduration=7.487078783 podStartE2EDuration="35.6748878s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.061663724 +0000 UTC m=+939.948401859" lastFinishedPulling="2025-11-24 09:05:12.249472741 +0000 UTC m=+968.136210876" observedRunningTime="2025-11-24 09:05:16.651637609 +0000 UTC m=+972.538375744" watchObservedRunningTime="2025-11-24 09:05:16.6748878 +0000 UTC m=+972.561625935" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.699358 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" podStartSLOduration=4.508211416 podStartE2EDuration="34.699338556s" 
podCreationTimestamp="2025-11-24 09:04:42 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.327777809 +0000 UTC m=+940.214515944" lastFinishedPulling="2025-11-24 09:05:14.518904949 +0000 UTC m=+970.405643084" observedRunningTime="2025-11-24 09:05:16.675629662 +0000 UTC m=+972.562367797" watchObservedRunningTime="2025-11-24 09:05:16.699338556 +0000 UTC m=+972.586076691" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.735631 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" podStartSLOduration=9.942209283 podStartE2EDuration="35.735601394s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.575850453 +0000 UTC m=+940.462588588" lastFinishedPulling="2025-11-24 09:05:10.369242564 +0000 UTC m=+966.255980699" observedRunningTime="2025-11-24 09:05:16.728278982 +0000 UTC m=+972.615017147" watchObservedRunningTime="2025-11-24 09:05:16.735601394 +0000 UTC m=+972.622339519" Nov 24 09:05:16 crc kubenswrapper[4886]: I1124 09:05:16.754970 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" podStartSLOduration=4.696499593 podStartE2EDuration="34.754938512s" podCreationTimestamp="2025-11-24 09:04:42 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.346458588 +0000 UTC m=+940.233196723" lastFinishedPulling="2025-11-24 09:05:14.404897507 +0000 UTC m=+970.291635642" observedRunningTime="2025-11-24 09:05:16.753343036 +0000 UTC m=+972.640081171" watchObservedRunningTime="2025-11-24 09:05:16.754938512 +0000 UTC m=+972.641676657" Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.324305 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" 
event={"ID":"9a2dc275-73a5-4caf-89fe-120ce9401655","Type":"ContainerStarted","Data":"826b954706c78c13c08c961681e2b9744f028bdcd194a2d566ddb39bbd60e19a"} Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.324869 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.326817 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" event={"ID":"ad04acbe-59a4-490c-ae4e-eacfbd65257c","Type":"ContainerStarted","Data":"9dcc6a22727aa8b749af11de19c7583fde74b160bbe34a53a790b336cb893627"} Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.328934 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" event={"ID":"0ca0fbbb-1734-4a4a-b996-c96aa000131c","Type":"ContainerStarted","Data":"26e29cdd72023399a575489cec96596841f53b93ce6e351ce3a7092063ffdb74"} Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.329025 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.331460 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" event={"ID":"a991f440-958e-42d4-b062-7369966d84c3","Type":"ContainerStarted","Data":"dff9315dd28570d66d01089baeb6df1d52ae342da3436cd3210c775665d5aa6a"} Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.334261 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" event={"ID":"6fc8a4d5-fad4-4eca-95c0-329b968d5c9d","Type":"ContainerStarted","Data":"04006e14bc9519da6e669821537c69443110a2f5a61e793d7dd845a9f167edb2"} Nov 24 09:05:17 crc 
kubenswrapper[4886]: I1124 09:05:17.334793 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.338793 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" event={"ID":"6f4398e5-a5b8-4853-ac68-76385d1a749d","Type":"ContainerStarted","Data":"89307e425ab389c19f9de8c59391d91d063fee76533ca1ee3dc2a8193ea071e7"} Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.344901 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" podStartSLOduration=10.222421025 podStartE2EDuration="36.344880679s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.243534596 +0000 UTC m=+940.130272721" lastFinishedPulling="2025-11-24 09:05:10.36599424 +0000 UTC m=+966.252732375" observedRunningTime="2025-11-24 09:05:17.341936054 +0000 UTC m=+973.228674189" watchObservedRunningTime="2025-11-24 09:05:17.344880679 +0000 UTC m=+973.231618814" Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.377898 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" podStartSLOduration=8.250086187 podStartE2EDuration="36.377873241s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:43.099672514 +0000 UTC m=+938.986410659" lastFinishedPulling="2025-11-24 09:05:11.227459578 +0000 UTC m=+967.114197713" observedRunningTime="2025-11-24 09:05:17.368744218 +0000 UTC m=+973.255482383" watchObservedRunningTime="2025-11-24 09:05:17.377873241 +0000 UTC m=+973.264611376" Nov 24 09:05:17 crc kubenswrapper[4886]: I1124 09:05:17.394573 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" podStartSLOduration=3.053872129 podStartE2EDuration="36.394547913s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:43.69690482 +0000 UTC m=+939.583642955" lastFinishedPulling="2025-11-24 09:05:17.037580614 +0000 UTC m=+972.924318739" observedRunningTime="2025-11-24 09:05:17.392945907 +0000 UTC m=+973.279684042" watchObservedRunningTime="2025-11-24 09:05:17.394547913 +0000 UTC m=+973.281286048" Nov 24 09:05:21 crc kubenswrapper[4886]: I1124 09:05:21.800907 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-6pwgl" Nov 24 09:05:21 crc kubenswrapper[4886]: I1124 09:05:21.902635 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-9lqmh" Nov 24 09:05:21 crc kubenswrapper[4886]: I1124 09:05:21.951126 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7969689c84-jb6p4" Nov 24 09:05:21 crc kubenswrapper[4886]: I1124 09:05:21.990645 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-glmkz" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.027231 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-z7c6j" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.079028 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-pvdd8" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.328014 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-99b499f4-tjkbx" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.486700 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-47vf5" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.510338 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-kczhh" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.547756 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-bpzxz" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.582107 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-qnv8p" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.644600 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-z6p4s" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.688619 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-nwx4f" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.713864 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d656998f4-qmlpw" Nov 24 09:05:22 crc kubenswrapper[4886]: I1124 09:05:22.879080 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-62fz7" Nov 24 09:05:23 crc kubenswrapper[4886]: I1124 09:05:23.075251 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-b4c496f69-kgnpt" Nov 24 09:05:23 crc kubenswrapper[4886]: I1124 09:05:23.208383 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw" Nov 24 09:05:30 crc kubenswrapper[4886]: I1124 09:05:30.458967 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" event={"ID":"607c4e63-3cb6-43f8-86b0-7af4b07e81e4","Type":"ContainerStarted","Data":"c593a8dc957707ca3ebee34b68940cafadc3701ca7f9844e373f0cd521bf870c"} Nov 24 09:05:30 crc kubenswrapper[4886]: I1124 09:05:30.460129 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" Nov 24 09:05:30 crc kubenswrapper[4886]: I1124 09:05:30.463964 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" event={"ID":"73e41e35-4218-492b-93d6-d068c687ee6e","Type":"ContainerStarted","Data":"196fba8c362d9b0b6948c6b847a17427bf9f73a0c31a62abb666657cbfebbff0"} Nov 24 09:05:30 crc kubenswrapper[4886]: I1124 09:05:30.464202 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" Nov 24 09:05:30 crc kubenswrapper[4886]: I1124 09:05:30.484366 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" podStartSLOduration=3.607261539 podStartE2EDuration="49.484344642s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.307819093 +0000 UTC m=+940.194557228" lastFinishedPulling="2025-11-24 09:05:30.184902196 +0000 UTC m=+986.071640331" observedRunningTime="2025-11-24 09:05:30.480860582 +0000 UTC m=+986.367598717" 
watchObservedRunningTime="2025-11-24 09:05:30.484344642 +0000 UTC m=+986.371082767" Nov 24 09:05:30 crc kubenswrapper[4886]: I1124 09:05:30.505349 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" podStartSLOduration=3.656655784 podStartE2EDuration="49.505325212s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.169189789 +0000 UTC m=+940.055927924" lastFinishedPulling="2025-11-24 09:05:30.017859217 +0000 UTC m=+985.904597352" observedRunningTime="2025-11-24 09:05:30.502220813 +0000 UTC m=+986.388958968" watchObservedRunningTime="2025-11-24 09:05:30.505325212 +0000 UTC m=+986.392063347" Nov 24 09:05:31 crc kubenswrapper[4886]: I1124 09:05:31.473686 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" event={"ID":"dc151242-3f76-4414-9a2b-a5e28adf12af","Type":"ContainerStarted","Data":"b0aabab22830c360cab7ac8dc90611fb9cd002078d203d42a49b50fd7dfeb86e"} Nov 24 09:05:31 crc kubenswrapper[4886]: I1124 09:05:31.473904 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" Nov 24 09:05:31 crc kubenswrapper[4886]: I1124 09:05:31.476388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" event={"ID":"0f03538e-297e-410d-bf6e-0f947cba868c","Type":"ContainerStarted","Data":"9f0ebf472b6355b008fe275920612be9e4cdfb3cc3af5b1d3d2dfbce4349e00b"} Nov 24 09:05:31 crc kubenswrapper[4886]: I1124 09:05:31.495196 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" podStartSLOduration=3.364227487 podStartE2EDuration="49.495168877s" podCreationTimestamp="2025-11-24 09:04:42 +0000 UTC" 
firstStartedPulling="2025-11-24 09:04:44.327103389 +0000 UTC m=+940.213841534" lastFinishedPulling="2025-11-24 09:05:30.458044789 +0000 UTC m=+986.344782924" observedRunningTime="2025-11-24 09:05:31.493407356 +0000 UTC m=+987.380145521" watchObservedRunningTime="2025-11-24 09:05:31.495168877 +0000 UTC m=+987.381907012" Nov 24 09:05:31 crc kubenswrapper[4886]: I1124 09:05:31.521598 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" podStartSLOduration=3.842832938 podStartE2EDuration="50.521577732s" podCreationTimestamp="2025-11-24 09:04:41 +0000 UTC" firstStartedPulling="2025-11-24 09:04:44.238993135 +0000 UTC m=+940.125731280" lastFinishedPulling="2025-11-24 09:05:30.917737939 +0000 UTC m=+986.804476074" observedRunningTime="2025-11-24 09:05:31.516110446 +0000 UTC m=+987.402848581" watchObservedRunningTime="2025-11-24 09:05:31.521577732 +0000 UTC m=+987.408315867" Nov 24 09:05:31 crc kubenswrapper[4886]: I1124 09:05:31.784256 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:05:31 crc kubenswrapper[4886]: I1124 09:05:31.784672 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:05:32 crc kubenswrapper[4886]: I1124 09:05:32.554967 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:05:42 crc kubenswrapper[4886]: I1124 09:05:42.376726 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-zks44" Nov 24 09:05:42 crc kubenswrapper[4886]: I1124 09:05:42.428978 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5zcvh" Nov 24 09:05:42 crc kubenswrapper[4886]: I1124 09:05:42.565011 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6df98c44d8-rsqm2" Nov 24 09:05:43 crc kubenswrapper[4886]: I1124 09:05:43.107439 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-7zbrr" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.003548 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jpvhq"] Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.008116 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.013166 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-n7dnz" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.013519 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.013706 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.013906 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.014122 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.029625 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jpvhq"] Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.109574 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-config\") pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.109750 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.109798 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kl88m\" (UniqueName: \"kubernetes.io/projected/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-kube-api-access-kl88m\") pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.211709 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.212288 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl88m\" (UniqueName: \"kubernetes.io/projected/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-kube-api-access-kl88m\") pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.212362 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-config\") pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.213124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.213212 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-config\") 
pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.251202 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl88m\" (UniqueName: \"kubernetes.io/projected/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-kube-api-access-kl88m\") pod \"dnsmasq-dns-78dd6ddcc-jpvhq\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.331297 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:05:58 crc kubenswrapper[4886]: I1124 09:05:58.809881 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jpvhq"] Nov 24 09:05:59 crc kubenswrapper[4886]: I1124 09:05:59.693585 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" event={"ID":"50cb1d88-1e47-46e0-9594-d6e2f3660d9d","Type":"ContainerStarted","Data":"c4c8905a09c01309e29fec2ac69277aa74b1149630bb8f558abdd7a98d8952cb"} Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.216801 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p5gk"] Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.218658 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.237774 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p5gk"] Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.407832 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-config\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.407904 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.408233 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztdsg\" (UniqueName: \"kubernetes.io/projected/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-kube-api-access-ztdsg\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.510560 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-config\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.510654 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.510741 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztdsg\" (UniqueName: \"kubernetes.io/projected/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-kube-api-access-ztdsg\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.511654 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-config\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.511736 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.549435 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztdsg\" (UniqueName: \"kubernetes.io/projected/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-kube-api-access-ztdsg\") pod \"dnsmasq-dns-666b6646f7-9p5gk\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.554337 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.630797 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jpvhq"] Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.659800 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k45wp"] Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.661682 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.674846 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k45wp"] Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.784791 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.784874 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.823470 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.823744 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-config\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.823887 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pm47\" (UniqueName: \"kubernetes.io/projected/0ce0238f-3de7-44fa-8e99-5345309b8c44-kube-api-access-9pm47\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.927007 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pm47\" (UniqueName: \"kubernetes.io/projected/0ce0238f-3de7-44fa-8e99-5345309b8c44-kube-api-access-9pm47\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.928078 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.928201 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-config\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.929428 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.929459 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-config\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.950817 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pm47\" (UniqueName: \"kubernetes.io/projected/0ce0238f-3de7-44fa-8e99-5345309b8c44-kube-api-access-9pm47\") pod \"dnsmasq-dns-57d769cc4f-k45wp\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:01 crc kubenswrapper[4886]: I1124 09:06:01.988449 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.344244 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p5gk"] Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.452579 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k45wp"] Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.461939 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.464204 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.467442 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cljrm" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.467659 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.468493 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.468996 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.469927 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.471057 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.471358 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.482928 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.641195 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-server-conf\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.641297 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.641334 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.641462 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.641522 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-config-data\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.641721 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcngm\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-kube-api-access-tcngm\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.641886 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.641932 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/510b7a7a-1206-44f7-bd72-a85590e7a1ac-pod-info\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.642000 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/510b7a7a-1206-44f7-bd72-a85590e7a1ac-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.642075 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.642116 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.732643 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" 
event={"ID":"0ce0238f-3de7-44fa-8e99-5345309b8c44","Type":"ContainerStarted","Data":"3bd3d7e233d17e25ef103151b9deeb2ead007499c22fe0dfa7b25ef541652d13"} Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.734533 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" event={"ID":"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb","Type":"ContainerStarted","Data":"58c2385655d8667afcd98d772a1956d0caf730583b1cc237827eeb4d7bbfa5dd"} Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743596 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-config-data\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743665 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcngm\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-kube-api-access-tcngm\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743715 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743732 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/510b7a7a-1206-44f7-bd72-a85590e7a1ac-pod-info\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 
09:06:02.743754 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/510b7a7a-1206-44f7-bd72-a85590e7a1ac-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743803 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743826 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743904 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-server-conf\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743955 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.743984 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.744012 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.744303 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.744576 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.744688 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.745133 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-config-data\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " 
pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.745396 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-server-conf\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.746400 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.764339 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/510b7a7a-1206-44f7-bd72-a85590e7a1ac-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.764448 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.764623 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.764670 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/510b7a7a-1206-44f7-bd72-a85590e7a1ac-pod-info\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.768260 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcngm\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-kube-api-access-tcngm\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.780616 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.787920 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.846333 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.848248 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.852750 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.853026 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.853215 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.853347 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.855376 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dsj2b" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.855435 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.855509 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.901121 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.947078 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.947146 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rln56\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-kube-api-access-rln56\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.947740 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.947791 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.947812 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.947833 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f10026aa-640c-4f36-9912-cd4177af074d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.947879 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.947903 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.948559 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.948595 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:02 crc kubenswrapper[4886]: I1124 09:06:02.948850 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f10026aa-640c-4f36-9912-cd4177af074d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.050836 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.050921 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.050959 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f10026aa-640c-4f36-9912-cd4177af074d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051055 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051089 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051131 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051184 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051235 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f10026aa-640c-4f36-9912-cd4177af074d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051287 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051335 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rln56\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-kube-api-access-rln56\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051377 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.051685 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.052317 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.052899 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.053257 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.053301 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.053433 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.057789 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f10026aa-640c-4f36-9912-cd4177af074d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.058626 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f10026aa-640c-4f36-9912-cd4177af074d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.060303 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.063633 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.076538 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rln56\" (UniqueName: 
\"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-kube-api-access-rln56\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.089018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:03 crc kubenswrapper[4886]: I1124 09:06:03.187841 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.170217 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.173685 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.177133 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.177218 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.177574 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.184758 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8zztv" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.186977 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.194847 4886 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.295797 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.295897 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.295923 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8wtd\" (UniqueName: \"kubernetes.io/projected/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-kube-api-access-h8wtd\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.295990 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-config-data-default\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.296221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " 
pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.296273 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.296318 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.296351 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-kolla-config\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.397541 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-config-data-default\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.397622 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc 
kubenswrapper[4886]: I1124 09:06:04.397646 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.397660 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.397683 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-kolla-config\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.397716 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.397754 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.397774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8wtd\" 
(UniqueName: \"kubernetes.io/projected/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-kube-api-access-h8wtd\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.398860 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-config-data-default\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.399440 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-kolla-config\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.399629 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.399712 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.401132 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.406279 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.428936 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.440113 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.444881 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8wtd\" (UniqueName: \"kubernetes.io/projected/3ec7bf38-594d-4606-ab2c-76f4fc8b6a29-kube-api-access-h8wtd\") pod \"openstack-galera-0\" (UID: \"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29\") " pod="openstack/openstack-galera-0" Nov 24 09:06:04 crc kubenswrapper[4886]: I1124 09:06:04.504579 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.423055 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.425564 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.429046 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4hcpj" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.429354 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.429407 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.429574 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.444079 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.518293 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.518436 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11ea3612-3583-4b82-9047-d11cd751adcd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.518464 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/11ea3612-3583-4b82-9047-d11cd751adcd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.518523 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11ea3612-3583-4b82-9047-d11cd751adcd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.518564 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.518615 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq2fd\" (UniqueName: \"kubernetes.io/projected/11ea3612-3583-4b82-9047-d11cd751adcd-kube-api-access-jq2fd\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.518681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.518716 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.620879 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.620993 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11ea3612-3583-4b82-9047-d11cd751adcd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.621032 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ea3612-3583-4b82-9047-d11cd751adcd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.621073 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11ea3612-3583-4b82-9047-d11cd751adcd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.621110 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.621132 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq2fd\" (UniqueName: \"kubernetes.io/projected/11ea3612-3583-4b82-9047-d11cd751adcd-kube-api-access-jq2fd\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.621195 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.621244 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.621357 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.622916 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.623033 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.624178 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/11ea3612-3583-4b82-9047-d11cd751adcd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.624550 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/11ea3612-3583-4b82-9047-d11cd751adcd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.637549 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ea3612-3583-4b82-9047-d11cd751adcd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.637584 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/11ea3612-3583-4b82-9047-d11cd751adcd-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.651119 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq2fd\" (UniqueName: \"kubernetes.io/projected/11ea3612-3583-4b82-9047-d11cd751adcd-kube-api-access-jq2fd\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.651535 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"11ea3612-3583-4b82-9047-d11cd751adcd\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.748377 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.759639 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.779007 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.783995 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.785823 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.791428 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.791730 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4w6ch" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.827079 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7db518ac-866a-47c8-a5fb-264625a1c1fd-config-data\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.827506 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7db518ac-866a-47c8-a5fb-264625a1c1fd-kolla-config\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.827660 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db518ac-866a-47c8-a5fb-264625a1c1fd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.828132 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db518ac-866a-47c8-a5fb-264625a1c1fd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.828999 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ln5\" (UniqueName: \"kubernetes.io/projected/7db518ac-866a-47c8-a5fb-264625a1c1fd-kube-api-access-n7ln5\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.932088 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db518ac-866a-47c8-a5fb-264625a1c1fd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.932519 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7ln5\" (UniqueName: \"kubernetes.io/projected/7db518ac-866a-47c8-a5fb-264625a1c1fd-kube-api-access-n7ln5\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.932710 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7db518ac-866a-47c8-a5fb-264625a1c1fd-config-data\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.932801 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7db518ac-866a-47c8-a5fb-264625a1c1fd-kolla-config\") pod \"memcached-0\" (UID: 
\"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.932885 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db518ac-866a-47c8-a5fb-264625a1c1fd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.933979 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7db518ac-866a-47c8-a5fb-264625a1c1fd-kolla-config\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.934101 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7db518ac-866a-47c8-a5fb-264625a1c1fd-config-data\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.937873 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db518ac-866a-47c8-a5fb-264625a1c1fd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.956404 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db518ac-866a-47c8-a5fb-264625a1c1fd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:05 crc kubenswrapper[4886]: I1124 09:06:05.964048 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7ln5\" (UniqueName: 
\"kubernetes.io/projected/7db518ac-866a-47c8-a5fb-264625a1c1fd-kube-api-access-n7ln5\") pod \"memcached-0\" (UID: \"7db518ac-866a-47c8-a5fb-264625a1c1fd\") " pod="openstack/memcached-0" Nov 24 09:06:06 crc kubenswrapper[4886]: I1124 09:06:06.106204 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 09:06:08 crc kubenswrapper[4886]: I1124 09:06:08.049932 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:06:08 crc kubenswrapper[4886]: I1124 09:06:08.053954 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 09:06:08 crc kubenswrapper[4886]: I1124 09:06:08.057476 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-8gpgf" Nov 24 09:06:08 crc kubenswrapper[4886]: I1124 09:06:08.064686 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:06:08 crc kubenswrapper[4886]: I1124 09:06:08.173602 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwg4\" (UniqueName: \"kubernetes.io/projected/c4dbd151-5916-42f8-9555-adf76d2480bf-kube-api-access-ptwg4\") pod \"kube-state-metrics-0\" (UID: \"c4dbd151-5916-42f8-9555-adf76d2480bf\") " pod="openstack/kube-state-metrics-0" Nov 24 09:06:08 crc kubenswrapper[4886]: I1124 09:06:08.281212 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwg4\" (UniqueName: \"kubernetes.io/projected/c4dbd151-5916-42f8-9555-adf76d2480bf-kube-api-access-ptwg4\") pod \"kube-state-metrics-0\" (UID: \"c4dbd151-5916-42f8-9555-adf76d2480bf\") " pod="openstack/kube-state-metrics-0" Nov 24 09:06:08 crc kubenswrapper[4886]: I1124 09:06:08.340283 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwg4\" (UniqueName: 
\"kubernetes.io/projected/c4dbd151-5916-42f8-9555-adf76d2480bf-kube-api-access-ptwg4\") pod \"kube-state-metrics-0\" (UID: \"c4dbd151-5916-42f8-9555-adf76d2480bf\") " pod="openstack/kube-state-metrics-0" Nov 24 09:06:08 crc kubenswrapper[4886]: I1124 09:06:08.381271 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 09:06:10 crc kubenswrapper[4886]: I1124 09:06:10.042442 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.009746 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rzmth"] Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.011049 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.016544 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-vcwg9" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.016814 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.017006 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.036750 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rzmth"] Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.057571 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-vclvw"] Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.059761 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.072736 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vclvw"] Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149355 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-run-ovn\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149412 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-run\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149456 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdx4\" (UniqueName: \"kubernetes.io/projected/6111b0c6-fec0-4738-8a86-e433a2b5c673-kube-api-access-ttdx4\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149494 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-log-ovn\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149529 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-etc-ovs\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149580 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2nm\" (UniqueName: \"kubernetes.io/projected/b7951685-e0e7-4524-ba49-b720357aa59c-kube-api-access-9t2nm\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149605 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6111b0c6-fec0-4738-8a86-e433a2b5c673-scripts\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149632 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7951685-e0e7-4524-ba49-b720357aa59c-combined-ca-bundle\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149697 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-run\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149745 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b7951685-e0e7-4524-ba49-b720357aa59c-scripts\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149770 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7951685-e0e7-4524-ba49-b720357aa59c-ovn-controller-tls-certs\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.149799 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-lib\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.150483 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-log\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253058 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7951685-e0e7-4524-ba49-b720357aa59c-scripts\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253127 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b7951685-e0e7-4524-ba49-b720357aa59c-ovn-controller-tls-certs\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253174 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-lib\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253218 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-log\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253259 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-run-ovn\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253280 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-run\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253310 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttdx4\" (UniqueName: \"kubernetes.io/projected/6111b0c6-fec0-4738-8a86-e433a2b5c673-kube-api-access-ttdx4\") pod \"ovn-controller-ovs-vclvw\" (UID: 
\"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253341 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-log-ovn\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253377 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-etc-ovs\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253426 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t2nm\" (UniqueName: \"kubernetes.io/projected/b7951685-e0e7-4524-ba49-b720357aa59c-kube-api-access-9t2nm\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253450 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6111b0c6-fec0-4738-8a86-e433a2b5c673-scripts\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.253474 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7951685-e0e7-4524-ba49-b720357aa59c-combined-ca-bundle\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 
09:06:11.253520 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-run\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.254048 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-log\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.254092 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-run\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.254045 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-run-ovn\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.254289 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-etc-ovs\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.255017 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-run\") pod 
\"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.255105 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b7951685-e0e7-4524-ba49-b720357aa59c-var-log-ovn\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.255760 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7951685-e0e7-4524-ba49-b720357aa59c-scripts\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.257207 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6111b0c6-fec0-4738-8a86-e433a2b5c673-var-lib\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.260006 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6111b0c6-fec0-4738-8a86-e433a2b5c673-scripts\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.267182 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7951685-e0e7-4524-ba49-b720357aa59c-combined-ca-bundle\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.267245 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7951685-e0e7-4524-ba49-b720357aa59c-ovn-controller-tls-certs\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.278223 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttdx4\" (UniqueName: \"kubernetes.io/projected/6111b0c6-fec0-4738-8a86-e433a2b5c673-kube-api-access-ttdx4\") pod \"ovn-controller-ovs-vclvw\" (UID: \"6111b0c6-fec0-4738-8a86-e433a2b5c673\") " pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.280864 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t2nm\" (UniqueName: \"kubernetes.io/projected/b7951685-e0e7-4524-ba49-b720357aa59c-kube-api-access-9t2nm\") pod \"ovn-controller-rzmth\" (UID: \"b7951685-e0e7-4524-ba49-b720357aa59c\") " pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.334985 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rzmth" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.406865 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.628146 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.631730 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.634676 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.635113 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-c7v2c" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.637987 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.638801 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.638907 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.649218 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.777127 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.777208 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt6jp\" (UniqueName: \"kubernetes.io/projected/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-kube-api-access-vt6jp\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.777243 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.777275 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.777305 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.777409 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-config\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.777681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.777715 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.879470 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.879554 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt6jp\" (UniqueName: \"kubernetes.io/projected/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-kube-api-access-vt6jp\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.879590 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.879617 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.879647 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " 
pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.879709 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-config\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.879768 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.879789 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.881134 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.881302 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.881825 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.883286 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-config\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.885837 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.886984 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.892733 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.910488 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt6jp\" (UniqueName: \"kubernetes.io/projected/43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1-kube-api-access-vt6jp\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " 
pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.922443 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:11 crc kubenswrapper[4886]: I1124 09:06:11.969441 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.448576 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.451146 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.453811 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.453954 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-l46ml" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.456002 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.457314 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.474091 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.535721 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/495262a2-0785-4f84-aeb5-00eff9c76e9a-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.535804 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.535874 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.535915 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.535952 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.536001 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/495262a2-0785-4f84-aeb5-00eff9c76e9a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " 
pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.536042 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxhxh\" (UniqueName: \"kubernetes.io/projected/495262a2-0785-4f84-aeb5-00eff9c76e9a-kube-api-access-nxhxh\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.536068 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495262a2-0785-4f84-aeb5-00eff9c76e9a-config\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.638018 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495262a2-0785-4f84-aeb5-00eff9c76e9a-config\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.638642 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/495262a2-0785-4f84-aeb5-00eff9c76e9a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.638699 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.638769 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.638809 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.638851 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.638910 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/495262a2-0785-4f84-aeb5-00eff9c76e9a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.638952 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxhxh\" (UniqueName: \"kubernetes.io/projected/495262a2-0785-4f84-aeb5-00eff9c76e9a-kube-api-access-nxhxh\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.639915 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"495262a2-0785-4f84-aeb5-00eff9c76e9a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.640190 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/495262a2-0785-4f84-aeb5-00eff9c76e9a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.641099 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/495262a2-0785-4f84-aeb5-00eff9c76e9a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.641113 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/495262a2-0785-4f84-aeb5-00eff9c76e9a-config\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.646401 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.646403 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.661025 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495262a2-0785-4f84-aeb5-00eff9c76e9a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.664328 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxhxh\" (UniqueName: \"kubernetes.io/projected/495262a2-0785-4f84-aeb5-00eff9c76e9a-kube-api-access-nxhxh\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.675094 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"495262a2-0785-4f84-aeb5-00eff9c76e9a\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.873469 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.883385 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"510b7a7a-1206-44f7-bd72-a85590e7a1ac","Type":"ContainerStarted","Data":"be19a30b49d26ec8f50d17b151a8436d3164406eac8524c25fa18c0d620d188d"} Nov 24 09:06:14 crc kubenswrapper[4886]: I1124 09:06:14.986168 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:06:15 crc kubenswrapper[4886]: I1124 09:06:15.060299 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 09:06:15 crc kubenswrapper[4886]: E1124 09:06:15.551020 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 09:06:15 crc kubenswrapper[4886]: E1124 09:06:15.551341 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl88m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-jpvhq_openstack(50cb1d88-1e47-46e0-9594-d6e2f3660d9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:06:15 crc kubenswrapper[4886]: E1124 09:06:15.552462 4886 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" podUID="50cb1d88-1e47-46e0-9594-d6e2f3660d9d" Nov 24 09:06:15 crc kubenswrapper[4886]: I1124 09:06:15.897868 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29","Type":"ContainerStarted","Data":"0759c4e8168df816a43279e305d109405df25a4a52bde174b01e48f5e392baa1"} Nov 24 09:06:15 crc kubenswrapper[4886]: I1124 09:06:15.900365 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f10026aa-640c-4f36-9912-cd4177af074d","Type":"ContainerStarted","Data":"12d895eeaf622caf75f1f4cf667982a72c621c9b6a710373f6c5d692ef5588be"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.010316 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 09:06:16 crc kubenswrapper[4886]: W1124 09:06:16.039669 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11ea3612_3583_4b82_9047_d11cd751adcd.slice/crio-f90e8812127a7bc234e06ac5a6e36621c737e14a298a2cec3f31bc03f6e4b6c6 WatchSource:0}: Error finding container f90e8812127a7bc234e06ac5a6e36621c737e14a298a2cec3f31bc03f6e4b6c6: Status 404 returned error can't find the container with id f90e8812127a7bc234e06ac5a6e36621c737e14a298a2cec3f31bc03f6e4b6c6 Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.252945 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.374292 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rzmth"] Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.380914 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/kube-state-metrics-0"] Nov 24 09:06:16 crc kubenswrapper[4886]: W1124 09:06:16.407573 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7951685_e0e7_4524_ba49_b720357aa59c.slice/crio-add8eb7f2969b3e95fba6717ab6711707f48ef59ecea28bdcbefb5b3d47d2447 WatchSource:0}: Error finding container add8eb7f2969b3e95fba6717ab6711707f48ef59ecea28bdcbefb5b3d47d2447: Status 404 returned error can't find the container with id add8eb7f2969b3e95fba6717ab6711707f48ef59ecea28bdcbefb5b3d47d2447 Nov 24 09:06:16 crc kubenswrapper[4886]: W1124 09:06:16.419930 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4dbd151_5916_42f8_9555_adf76d2480bf.slice/crio-bfe200e2ebb6377374ae537725aff3e0b9cd28c95eae2dcf3ee4cd2ebe7c960d WatchSource:0}: Error finding container bfe200e2ebb6377374ae537725aff3e0b9cd28c95eae2dcf3ee4cd2ebe7c960d: Status 404 returned error can't find the container with id bfe200e2ebb6377374ae537725aff3e0b9cd28c95eae2dcf3ee4cd2ebe7c960d Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.430864 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.478509 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-config\") pod \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.478699 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl88m\" (UniqueName: \"kubernetes.io/projected/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-kube-api-access-kl88m\") pod \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.478940 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-dns-svc\") pod \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\" (UID: \"50cb1d88-1e47-46e0-9594-d6e2f3660d9d\") " Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.479750 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50cb1d88-1e47-46e0-9594-d6e2f3660d9d" (UID: "50cb1d88-1e47-46e0-9594-d6e2f3660d9d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.480199 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-config" (OuterVolumeSpecName: "config") pod "50cb1d88-1e47-46e0-9594-d6e2f3660d9d" (UID: "50cb1d88-1e47-46e0-9594-d6e2f3660d9d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.487402 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-kube-api-access-kl88m" (OuterVolumeSpecName: "kube-api-access-kl88m") pod "50cb1d88-1e47-46e0-9594-d6e2f3660d9d" (UID: "50cb1d88-1e47-46e0-9594-d6e2f3660d9d"). InnerVolumeSpecName "kube-api-access-kl88m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.521487 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.581493 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.581537 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.581554 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl88m\" (UniqueName: \"kubernetes.io/projected/50cb1d88-1e47-46e0-9594-d6e2f3660d9d-kube-api-access-kl88m\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.716453 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vclvw"] Nov 24 09:06:16 crc kubenswrapper[4886]: W1124 09:06:16.729525 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6111b0c6_fec0_4738_8a86_e433a2b5c673.slice/crio-4547a21580dbafdf07e33397fcbff5676810147c9ce22799bce4e9a94ce7474e WatchSource:0}: Error finding container 
4547a21580dbafdf07e33397fcbff5676810147c9ce22799bce4e9a94ce7474e: Status 404 returned error can't find the container with id 4547a21580dbafdf07e33397fcbff5676810147c9ce22799bce4e9a94ce7474e Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.915222 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vclvw" event={"ID":"6111b0c6-fec0-4738-8a86-e433a2b5c673","Type":"ContainerStarted","Data":"4547a21580dbafdf07e33397fcbff5676810147c9ce22799bce4e9a94ce7474e"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.918556 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rzmth" event={"ID":"b7951685-e0e7-4524-ba49-b720357aa59c","Type":"ContainerStarted","Data":"add8eb7f2969b3e95fba6717ab6711707f48ef59ecea28bdcbefb5b3d47d2447"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.921123 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"495262a2-0785-4f84-aeb5-00eff9c76e9a","Type":"ContainerStarted","Data":"6f6ddea957f4897d2058b146e99f18c7714040251fa1d8bcc4fc213f24d69f3e"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.924086 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4dbd151-5916-42f8-9555-adf76d2480bf","Type":"ContainerStarted","Data":"bfe200e2ebb6377374ae537725aff3e0b9cd28c95eae2dcf3ee4cd2ebe7c960d"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.927583 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7db518ac-866a-47c8-a5fb-264625a1c1fd","Type":"ContainerStarted","Data":"ac63347893a2ff5b78773ddb2873c1a9cf0d2e64855a09b4b6f96c3b8ad34101"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.930636 4886 generic.go:334] "Generic (PLEG): container finished" podID="0ce0238f-3de7-44fa-8e99-5345309b8c44" containerID="26d3f2cb1b5105a1923875bb00474f6d8ed4589d8f2fd1dcaa3634aa7a414d08" exitCode=0 Nov 24 09:06:16 crc 
kubenswrapper[4886]: I1124 09:06:16.930762 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" event={"ID":"0ce0238f-3de7-44fa-8e99-5345309b8c44","Type":"ContainerDied","Data":"26d3f2cb1b5105a1923875bb00474f6d8ed4589d8f2fd1dcaa3634aa7a414d08"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.936266 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" event={"ID":"50cb1d88-1e47-46e0-9594-d6e2f3660d9d","Type":"ContainerDied","Data":"c4c8905a09c01309e29fec2ac69277aa74b1149630bb8f558abdd7a98d8952cb"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.936401 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-jpvhq" Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.940391 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11ea3612-3583-4b82-9047-d11cd751adcd","Type":"ContainerStarted","Data":"f90e8812127a7bc234e06ac5a6e36621c737e14a298a2cec3f31bc03f6e4b6c6"} Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.942761 4886 generic.go:334] "Generic (PLEG): container finished" podID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" containerID="185e70573db2790e7432cc637a1d32e82821fadbeae68e928b9d9ed85229cd76" exitCode=0 Nov 24 09:06:16 crc kubenswrapper[4886]: I1124 09:06:16.942808 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" event={"ID":"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb","Type":"ContainerDied","Data":"185e70573db2790e7432cc637a1d32e82821fadbeae68e928b9d9ed85229cd76"} Nov 24 09:06:17 crc kubenswrapper[4886]: I1124 09:06:17.016443 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jpvhq"] Nov 24 09:06:17 crc kubenswrapper[4886]: I1124 09:06:17.025076 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-jpvhq"] 
Nov 24 09:06:17 crc kubenswrapper[4886]: I1124 09:06:17.600224 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 09:06:18 crc kubenswrapper[4886]: I1124 09:06:18.869172 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50cb1d88-1e47-46e0-9594-d6e2f3660d9d" path="/var/lib/kubelet/pods/50cb1d88-1e47-46e0-9594-d6e2f3660d9d/volumes" Nov 24 09:06:19 crc kubenswrapper[4886]: W1124 09:06:19.132898 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ae2d6a_6a35_4cf5_81fd_76c9a41c6db1.slice/crio-f6955aad7623c42c937936321654db8a52df7452b3d63f1eeb468fc3f99728ab WatchSource:0}: Error finding container f6955aad7623c42c937936321654db8a52df7452b3d63f1eeb468fc3f99728ab: Status 404 returned error can't find the container with id f6955aad7623c42c937936321654db8a52df7452b3d63f1eeb468fc3f99728ab Nov 24 09:06:19 crc kubenswrapper[4886]: I1124 09:06:19.968635 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1","Type":"ContainerStarted","Data":"f6955aad7623c42c937936321654db8a52df7452b3d63f1eeb468fc3f99728ab"} Nov 24 09:06:25 crc kubenswrapper[4886]: I1124 09:06:25.022855 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7db518ac-866a-47c8-a5fb-264625a1c1fd","Type":"ContainerStarted","Data":"7de9dba7dd77629c2f0fa07a47bd34fb522e77a2fc9df2d0ecce3f8e119a5ee4"} Nov 24 09:06:25 crc kubenswrapper[4886]: I1124 09:06:25.023927 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 09:06:25 crc kubenswrapper[4886]: I1124 09:06:25.026229 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" 
event={"ID":"0ce0238f-3de7-44fa-8e99-5345309b8c44","Type":"ContainerStarted","Data":"20bd5c0539ec2cb750f212f1008cf01d76dcdb39bb30bccd224032d7f092cb85"} Nov 24 09:06:25 crc kubenswrapper[4886]: I1124 09:06:25.026412 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:25 crc kubenswrapper[4886]: I1124 09:06:25.028214 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11ea3612-3583-4b82-9047-d11cd751adcd","Type":"ContainerStarted","Data":"00b311c168db0e67788aeb1a04a7307a15c404a333058e499ee46066f28e384e"} Nov 24 09:06:25 crc kubenswrapper[4886]: I1124 09:06:25.162028 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.350074929 podStartE2EDuration="20.161982327s" podCreationTimestamp="2025-11-24 09:06:05 +0000 UTC" firstStartedPulling="2025-11-24 09:06:16.275634469 +0000 UTC m=+1032.162372604" lastFinishedPulling="2025-11-24 09:06:23.087541867 +0000 UTC m=+1038.974280002" observedRunningTime="2025-11-24 09:06:25.140529794 +0000 UTC m=+1041.027267929" watchObservedRunningTime="2025-11-24 09:06:25.161982327 +0000 UTC m=+1041.048720462" Nov 24 09:06:25 crc kubenswrapper[4886]: I1124 09:06:25.172940 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" podStartSLOduration=10.912748536 podStartE2EDuration="24.17291104s" podCreationTimestamp="2025-11-24 09:06:01 +0000 UTC" firstStartedPulling="2025-11-24 09:06:02.460318496 +0000 UTC m=+1018.347056631" lastFinishedPulling="2025-11-24 09:06:15.720481 +0000 UTC m=+1031.607219135" observedRunningTime="2025-11-24 09:06:25.165258631 +0000 UTC m=+1041.051996786" watchObservedRunningTime="2025-11-24 09:06:25.17291104 +0000 UTC m=+1041.059649175" Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.040711 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-vclvw" event={"ID":"6111b0c6-fec0-4738-8a86-e433a2b5c673","Type":"ContainerStarted","Data":"a8db7185241ffdf31874c5032d0dc84be62f90c2d3df9046f3adb681ef324e5a"} Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.044827 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29","Type":"ContainerStarted","Data":"939e63977497c76d0380b203a10dcd4ccb366dbe2505bcdd619a986975c80611"} Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.049511 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" event={"ID":"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb","Type":"ContainerStarted","Data":"435df7a6ee64a6dffe88e7c5f43c652f570b5f30ccd47ff2e9c69e6b7e3fbf72"} Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.050476 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.054015 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"495262a2-0785-4f84-aeb5-00eff9c76e9a","Type":"ContainerStarted","Data":"cbb9db5303cd7b445b859be4ae81e754ac454d38fe3ff66dae52e031d5a56dd9"} Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.057994 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1","Type":"ContainerStarted","Data":"4f91fb822e72347b2b09c3e41364ae61753a56eefddc37603c665e1c22e89318"} Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.060814 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rzmth" event={"ID":"b7951685-e0e7-4524-ba49-b720357aa59c","Type":"ContainerStarted","Data":"ae342cb9ced1a7b38c3ff8ab45ab3fe67fd5b951eeeb72dcf0e6ff256a3368e8"} Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.062539 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/ovn-controller-rzmth" Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.141179 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rzmth" podStartSLOduration=8.393917044 podStartE2EDuration="16.141132036s" podCreationTimestamp="2025-11-24 09:06:10 +0000 UTC" firstStartedPulling="2025-11-24 09:06:16.409691034 +0000 UTC m=+1032.296429169" lastFinishedPulling="2025-11-24 09:06:24.156906026 +0000 UTC m=+1040.043644161" observedRunningTime="2025-11-24 09:06:26.137319907 +0000 UTC m=+1042.024058062" watchObservedRunningTime="2025-11-24 09:06:26.141132036 +0000 UTC m=+1042.027870171" Nov 24 09:06:26 crc kubenswrapper[4886]: I1124 09:06:26.143550 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" podStartSLOduration=11.833243268 podStartE2EDuration="25.143531375s" podCreationTimestamp="2025-11-24 09:06:01 +0000 UTC" firstStartedPulling="2025-11-24 09:06:02.405486238 +0000 UTC m=+1018.292224373" lastFinishedPulling="2025-11-24 09:06:15.715774345 +0000 UTC m=+1031.602512480" observedRunningTime="2025-11-24 09:06:26.118054066 +0000 UTC m=+1042.004792201" watchObservedRunningTime="2025-11-24 09:06:26.143531375 +0000 UTC m=+1042.030269510" Nov 24 09:06:27 crc kubenswrapper[4886]: I1124 09:06:27.071245 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f10026aa-640c-4f36-9912-cd4177af074d","Type":"ContainerStarted","Data":"e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d"} Nov 24 09:06:27 crc kubenswrapper[4886]: I1124 09:06:27.091497 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"510b7a7a-1206-44f7-bd72-a85590e7a1ac","Type":"ContainerStarted","Data":"63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60"} Nov 24 09:06:27 crc kubenswrapper[4886]: I1124 09:06:27.106246 
4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4dbd151-5916-42f8-9555-adf76d2480bf","Type":"ContainerStarted","Data":"2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4"} Nov 24 09:06:27 crc kubenswrapper[4886]: I1124 09:06:27.107227 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 09:06:27 crc kubenswrapper[4886]: I1124 09:06:27.111092 4886 generic.go:334] "Generic (PLEG): container finished" podID="6111b0c6-fec0-4738-8a86-e433a2b5c673" containerID="a8db7185241ffdf31874c5032d0dc84be62f90c2d3df9046f3adb681ef324e5a" exitCode=0 Nov 24 09:06:27 crc kubenswrapper[4886]: I1124 09:06:27.112139 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vclvw" event={"ID":"6111b0c6-fec0-4738-8a86-e433a2b5c673","Type":"ContainerDied","Data":"a8db7185241ffdf31874c5032d0dc84be62f90c2d3df9046f3adb681ef324e5a"} Nov 24 09:06:27 crc kubenswrapper[4886]: I1124 09:06:27.166776 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.444009141 podStartE2EDuration="19.166752614s" podCreationTimestamp="2025-11-24 09:06:08 +0000 UTC" firstStartedPulling="2025-11-24 09:06:16.423063927 +0000 UTC m=+1032.309802062" lastFinishedPulling="2025-11-24 09:06:26.1458074 +0000 UTC m=+1042.032545535" observedRunningTime="2025-11-24 09:06:27.157205781 +0000 UTC m=+1043.043943916" watchObservedRunningTime="2025-11-24 09:06:27.166752614 +0000 UTC m=+1043.053490749" Nov 24 09:06:31 crc kubenswrapper[4886]: I1124 09:06:31.107838 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 09:06:31 crc kubenswrapper[4886]: I1124 09:06:31.555386 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:31 crc kubenswrapper[4886]: I1124 
09:06:31.784219 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:06:31 crc kubenswrapper[4886]: I1124 09:06:31.784766 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:06:31 crc kubenswrapper[4886]: I1124 09:06:31.784830 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:06:31 crc kubenswrapper[4886]: I1124 09:06:31.787963 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c28e4d2681964faf5e8db0a7f606c313301cd5d8f7fd6af733f6e4caf7367ebc"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:06:31 crc kubenswrapper[4886]: I1124 09:06:31.788089 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://c28e4d2681964faf5e8db0a7f606c313301cd5d8f7fd6af733f6e4caf7367ebc" gracePeriod=600 Nov 24 09:06:31 crc kubenswrapper[4886]: I1124 09:06:31.990367 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:32 crc kubenswrapper[4886]: I1124 09:06:32.093733 4886 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p5gk"] Nov 24 09:06:32 crc kubenswrapper[4886]: I1124 09:06:32.159030 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" podUID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" containerName="dnsmasq-dns" containerID="cri-o://435df7a6ee64a6dffe88e7c5f43c652f570b5f30ccd47ff2e9c69e6b7e3fbf72" gracePeriod=10 Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.178046 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="c28e4d2681964faf5e8db0a7f606c313301cd5d8f7fd6af733f6e4caf7367ebc" exitCode=0 Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.178282 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"c28e4d2681964faf5e8db0a7f606c313301cd5d8f7fd6af733f6e4caf7367ebc"} Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.178534 4886 scope.go:117] "RemoveContainer" containerID="2e3a75d48f5b6c64a0453de51e83f56ff421f563e7ead2b6374e43297260b2ce" Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.187334 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vclvw" event={"ID":"6111b0c6-fec0-4738-8a86-e433a2b5c673","Type":"ContainerStarted","Data":"4b55791ed97e3af8082337e2989fc15599327034c33dab6377e15d9d8bfc3621"} Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.190616 4886 generic.go:334] "Generic (PLEG): container finished" podID="11ea3612-3583-4b82-9047-d11cd751adcd" containerID="00b311c168db0e67788aeb1a04a7307a15c404a333058e499ee46066f28e384e" exitCode=0 Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.190714 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"11ea3612-3583-4b82-9047-d11cd751adcd","Type":"ContainerDied","Data":"00b311c168db0e67788aeb1a04a7307a15c404a333058e499ee46066f28e384e"} Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.196822 4886 generic.go:334] "Generic (PLEG): container finished" podID="3ec7bf38-594d-4606-ab2c-76f4fc8b6a29" containerID="939e63977497c76d0380b203a10dcd4ccb366dbe2505bcdd619a986975c80611" exitCode=0 Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.196949 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29","Type":"ContainerDied","Data":"939e63977497c76d0380b203a10dcd4ccb366dbe2505bcdd619a986975c80611"} Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.200263 4886 generic.go:334] "Generic (PLEG): container finished" podID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" containerID="435df7a6ee64a6dffe88e7c5f43c652f570b5f30ccd47ff2e9c69e6b7e3fbf72" exitCode=0 Nov 24 09:06:33 crc kubenswrapper[4886]: I1124 09:06:33.200318 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" event={"ID":"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb","Type":"ContainerDied","Data":"435df7a6ee64a6dffe88e7c5f43c652f570b5f30ccd47ff2e9c69e6b7e3fbf72"} Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.539230 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hqh7m"] Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.540721 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.544004 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.546127 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hqh7m"] Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.590370 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/abe55c7e-0682-4591-bd60-59ee1de24094-ovs-rundir\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.590419 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe55c7e-0682-4591-bd60-59ee1de24094-combined-ca-bundle\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.590488 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe55c7e-0682-4591-bd60-59ee1de24094-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.590516 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj48s\" (UniqueName: \"kubernetes.io/projected/abe55c7e-0682-4591-bd60-59ee1de24094-kube-api-access-hj48s\") pod \"ovn-controller-metrics-hqh7m\" (UID: 
\"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.590611 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/abe55c7e-0682-4591-bd60-59ee1de24094-ovn-rundir\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.590661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abe55c7e-0682-4591-bd60-59ee1de24094-config\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.692774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj48s\" (UniqueName: \"kubernetes.io/projected/abe55c7e-0682-4591-bd60-59ee1de24094-kube-api-access-hj48s\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.692877 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/abe55c7e-0682-4591-bd60-59ee1de24094-ovn-rundir\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.692949 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abe55c7e-0682-4591-bd60-59ee1de24094-config\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " 
pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.693039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/abe55c7e-0682-4591-bd60-59ee1de24094-ovs-rundir\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.693066 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe55c7e-0682-4591-bd60-59ee1de24094-combined-ca-bundle\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.693099 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe55c7e-0682-4591-bd60-59ee1de24094-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.694050 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/abe55c7e-0682-4591-bd60-59ee1de24094-ovs-rundir\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.694114 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/abe55c7e-0682-4591-bd60-59ee1de24094-ovn-rundir\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc 
kubenswrapper[4886]: I1124 09:06:34.694275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abe55c7e-0682-4591-bd60-59ee1de24094-config\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.702124 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/abe55c7e-0682-4591-bd60-59ee1de24094-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.707885 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe55c7e-0682-4591-bd60-59ee1de24094-combined-ca-bundle\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.722151 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj48s\" (UniqueName: \"kubernetes.io/projected/abe55c7e-0682-4591-bd60-59ee1de24094-kube-api-access-hj48s\") pod \"ovn-controller-metrics-hqh7m\" (UID: \"abe55c7e-0682-4591-bd60-59ee1de24094\") " pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.800860 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-v6txk"] Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.803317 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.809925 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.823570 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-v6txk"] Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.894914 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hqh7m" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.897111 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzb7\" (UniqueName: \"kubernetes.io/projected/dea859e7-66b4-401e-9495-b49b060fcc1a-kube-api-access-jfzb7\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.897178 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.897829 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.897882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-config\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.949402 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-v6txk"] Nov 24 09:06:34 crc kubenswrapper[4886]: E1124 09:06:34.950309 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-jfzb7 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" podUID="dea859e7-66b4-401e-9495-b49b060fcc1a" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.977775 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-lz6bg"] Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.988173 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.990602 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.997115 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lz6bg"] Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.999396 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-config\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.999469 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.999589 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.999638 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-config\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 
09:06:34.999677 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.999731 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfzb7\" (UniqueName: \"kubernetes.io/projected/dea859e7-66b4-401e-9495-b49b060fcc1a-kube-api-access-jfzb7\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.999761 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjqn\" (UniqueName: \"kubernetes.io/projected/647ee34f-ec2c-4feb-b583-4e37a20f96a3-kube-api-access-9rjqn\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.999802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:34 crc kubenswrapper[4886]: I1124 09:06:34.999834 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-dns-svc\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc 
kubenswrapper[4886]: I1124 09:06:35.001491 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.001534 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.001686 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-config\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.036637 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfzb7\" (UniqueName: \"kubernetes.io/projected/dea859e7-66b4-401e-9495-b49b060fcc1a-kube-api-access-jfzb7\") pod \"dnsmasq-dns-5bf47b49b7-v6txk\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.102268 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.102425 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9rjqn\" (UniqueName: \"kubernetes.io/projected/647ee34f-ec2c-4feb-b583-4e37a20f96a3-kube-api-access-9rjqn\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.102475 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-dns-svc\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.102544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-config\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.102579 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.104073 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-dns-svc\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.104313 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.104426 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.105505 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-config\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.122831 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rjqn\" (UniqueName: \"kubernetes.io/projected/647ee34f-ec2c-4feb-b583-4e37a20f96a3-kube-api-access-9rjqn\") pod \"dnsmasq-dns-8554648995-lz6bg\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.242425 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.264660 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.312091 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-ovsdbserver-nb\") pod \"dea859e7-66b4-401e-9495-b49b060fcc1a\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.312238 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.312271 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfzb7\" (UniqueName: \"kubernetes.io/projected/dea859e7-66b4-401e-9495-b49b060fcc1a-kube-api-access-jfzb7\") pod \"dea859e7-66b4-401e-9495-b49b060fcc1a\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.312423 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-dns-svc\") pod \"dea859e7-66b4-401e-9495-b49b060fcc1a\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.312447 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-config\") pod \"dea859e7-66b4-401e-9495-b49b060fcc1a\" (UID: \"dea859e7-66b4-401e-9495-b49b060fcc1a\") " Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.313728 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-config" (OuterVolumeSpecName: "config") pod "dea859e7-66b4-401e-9495-b49b060fcc1a" (UID: "dea859e7-66b4-401e-9495-b49b060fcc1a"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.314125 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dea859e7-66b4-401e-9495-b49b060fcc1a" (UID: "dea859e7-66b4-401e-9495-b49b060fcc1a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.314209 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dea859e7-66b4-401e-9495-b49b060fcc1a" (UID: "dea859e7-66b4-401e-9495-b49b060fcc1a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.365480 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea859e7-66b4-401e-9495-b49b060fcc1a-kube-api-access-jfzb7" (OuterVolumeSpecName: "kube-api-access-jfzb7") pod "dea859e7-66b4-401e-9495-b49b060fcc1a" (UID: "dea859e7-66b4-401e-9495-b49b060fcc1a"). InnerVolumeSpecName "kube-api-access-jfzb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.417313 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfzb7\" (UniqueName: \"kubernetes.io/projected/dea859e7-66b4-401e-9495-b49b060fcc1a-kube-api-access-jfzb7\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.417352 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.417361 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.417371 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dea859e7-66b4-401e-9495-b49b060fcc1a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.489612 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.518091 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-dns-svc\") pod \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.518149 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztdsg\" (UniqueName: \"kubernetes.io/projected/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-kube-api-access-ztdsg\") pod \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.518374 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-config\") pod \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\" (UID: \"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb\") " Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.529490 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-kube-api-access-ztdsg" (OuterVolumeSpecName: "kube-api-access-ztdsg") pod "e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" (UID: "e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb"). InnerVolumeSpecName "kube-api-access-ztdsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.624891 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztdsg\" (UniqueName: \"kubernetes.io/projected/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-kube-api-access-ztdsg\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.663070 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-config" (OuterVolumeSpecName: "config") pod "e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" (UID: "e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.687935 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" (UID: "e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.733787 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.733836 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:35 crc kubenswrapper[4886]: I1124 09:06:35.747943 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hqh7m"] Nov 24 09:06:35 crc kubenswrapper[4886]: W1124 09:06:35.774151 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe55c7e_0682_4591_bd60_59ee1de24094.slice/crio-22cc8285eeeda9ec78956d50b4d993f91eddab5d9dc848b42be5c8d23480d8d2 WatchSource:0}: Error finding container 22cc8285eeeda9ec78956d50b4d993f91eddab5d9dc848b42be5c8d23480d8d2: Status 404 returned error can't find the container with id 22cc8285eeeda9ec78956d50b4d993f91eddab5d9dc848b42be5c8d23480d8d2 Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.106803 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lz6bg"] Nov 24 09:06:36 crc kubenswrapper[4886]: W1124 09:06:36.117867 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod647ee34f_ec2c_4feb_b583_4e37a20f96a3.slice/crio-d685dc4fa428ee8c5303c0d429b31f27513d1a20b9879a26d9cdd0adbdfb4649 WatchSource:0}: Error finding container d685dc4fa428ee8c5303c0d429b31f27513d1a20b9879a26d9cdd0adbdfb4649: Status 404 returned error can't find the container with id d685dc4fa428ee8c5303c0d429b31f27513d1a20b9879a26d9cdd0adbdfb4649 Nov 24 09:06:36 crc 
kubenswrapper[4886]: I1124 09:06:36.255312 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hqh7m" event={"ID":"abe55c7e-0682-4591-bd60-59ee1de24094","Type":"ContainerStarted","Data":"f6cdb63e10397606fc011b920060d888bb46d7025076b246d5b46561871b58a9"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.255369 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hqh7m" event={"ID":"abe55c7e-0682-4591-bd60-59ee1de24094","Type":"ContainerStarted","Data":"22cc8285eeeda9ec78956d50b4d993f91eddab5d9dc848b42be5c8d23480d8d2"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.258972 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.258980 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p5gk" event={"ID":"e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb","Type":"ContainerDied","Data":"58c2385655d8667afcd98d772a1956d0caf730583b1cc237827eeb4d7bbfa5dd"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.259067 4886 scope.go:117] "RemoveContainer" containerID="435df7a6ee64a6dffe88e7c5f43c652f570b5f30ccd47ff2e9c69e6b7e3fbf72" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.262283 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vclvw" event={"ID":"6111b0c6-fec0-4738-8a86-e433a2b5c673","Type":"ContainerStarted","Data":"767296b95b12fe85180d6c91b66dea3a695b9d105da073b58aea23c32fbbbdbd"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.263349 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.263385 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 
09:06:36.264806 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lz6bg" event={"ID":"647ee34f-ec2c-4feb-b583-4e37a20f96a3","Type":"ContainerStarted","Data":"d685dc4fa428ee8c5303c0d429b31f27513d1a20b9879a26d9cdd0adbdfb4649"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.266623 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"11ea3612-3583-4b82-9047-d11cd751adcd","Type":"ContainerStarted","Data":"1437aa17343ee0634f14f004ac849e8baef660440c965a823a124f0a4b55bcc7"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.269748 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3ec7bf38-594d-4606-ab2c-76f4fc8b6a29","Type":"ContainerStarted","Data":"cd244dfa3707e9f109abe7ea548efa29302b17d033f98437ceb5879aee16d42d"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.273984 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"36e22a101132c390ac35de718c60f14be6675ff8618943dfbe4e49f19370e8c5"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.281704 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hqh7m" podStartSLOduration=2.281682561 podStartE2EDuration="2.281682561s" podCreationTimestamp="2025-11-24 09:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:36.274632379 +0000 UTC m=+1052.161370514" watchObservedRunningTime="2025-11-24 09:06:36.281682561 +0000 UTC m=+1052.168420696" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.288339 4886 scope.go:117] "RemoveContainer" containerID="185e70573db2790e7432cc637a1d32e82821fadbeae68e928b9d9ed85229cd76" Nov 24 09:06:36 crc 
kubenswrapper[4886]: I1124 09:06:36.289751 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"495262a2-0785-4f84-aeb5-00eff9c76e9a","Type":"ContainerStarted","Data":"5621fc7b330e439a4332ceadfe2aa3473c61304c7e3c5ae53a3505256abb689c"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.294751 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-v6txk" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.296262 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1","Type":"ContainerStarted","Data":"8da56df531fe97a61487ce5db9cb73cc3b46795f7239647f5b0094546d7cf66a"} Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.371616 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-vclvw" podStartSLOduration=18.083822843 podStartE2EDuration="25.371590083s" podCreationTimestamp="2025-11-24 09:06:11 +0000 UTC" firstStartedPulling="2025-11-24 09:06:16.733867547 +0000 UTC m=+1032.620605692" lastFinishedPulling="2025-11-24 09:06:24.021634787 +0000 UTC m=+1039.908372932" observedRunningTime="2025-11-24 09:06:36.359661772 +0000 UTC m=+1052.246399907" watchObservedRunningTime="2025-11-24 09:06:36.371590083 +0000 UTC m=+1052.258328218" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.374548 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.330602863 podStartE2EDuration="32.374535597s" podCreationTimestamp="2025-11-24 09:06:04 +0000 UTC" firstStartedPulling="2025-11-24 09:06:16.043537271 +0000 UTC m=+1031.930275406" lastFinishedPulling="2025-11-24 09:06:23.087470005 +0000 UTC m=+1038.974208140" observedRunningTime="2025-11-24 09:06:36.314316414 +0000 UTC m=+1052.201054569" watchObservedRunningTime="2025-11-24 09:06:36.374535597 +0000 UTC 
m=+1052.261273732" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.410336 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.214989531 podStartE2EDuration="33.410318751s" podCreationTimestamp="2025-11-24 09:06:03 +0000 UTC" firstStartedPulling="2025-11-24 09:06:15.542137648 +0000 UTC m=+1031.428875783" lastFinishedPulling="2025-11-24 09:06:23.737466868 +0000 UTC m=+1039.624205003" observedRunningTime="2025-11-24 09:06:36.40785242 +0000 UTC m=+1052.294590565" watchObservedRunningTime="2025-11-24 09:06:36.410318751 +0000 UTC m=+1052.297056886" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.429392 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p5gk"] Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.439924 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p5gk"] Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.471128 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-v6txk"] Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.503292 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-v6txk"] Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.511643 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.862625586 podStartE2EDuration="23.511545566s" podCreationTimestamp="2025-11-24 09:06:13 +0000 UTC" firstStartedPulling="2025-11-24 09:06:16.536949074 +0000 UTC m=+1032.423687199" lastFinishedPulling="2025-11-24 09:06:35.185869044 +0000 UTC m=+1051.072607179" observedRunningTime="2025-11-24 09:06:36.482239158 +0000 UTC m=+1052.368977303" watchObservedRunningTime="2025-11-24 09:06:36.511545566 +0000 UTC m=+1052.398283691" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.523662 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.530011088 podStartE2EDuration="26.523640352s" podCreationTimestamp="2025-11-24 09:06:10 +0000 UTC" firstStartedPulling="2025-11-24 09:06:19.265209558 +0000 UTC m=+1035.151947693" lastFinishedPulling="2025-11-24 09:06:35.258838822 +0000 UTC m=+1051.145576957" observedRunningTime="2025-11-24 09:06:36.504515705 +0000 UTC m=+1052.391253830" watchObservedRunningTime="2025-11-24 09:06:36.523640352 +0000 UTC m=+1052.410378487" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.863006 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea859e7-66b4-401e-9495-b49b060fcc1a" path="/var/lib/kubelet/pods/dea859e7-66b4-401e-9495-b49b060fcc1a/volumes" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.863946 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" path="/var/lib/kubelet/pods/e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb/volumes" Nov 24 09:06:36 crc kubenswrapper[4886]: I1124 09:06:36.970404 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:37 crc kubenswrapper[4886]: I1124 09:06:37.304289 4886 generic.go:334] "Generic (PLEG): container finished" podID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" containerID="d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6" exitCode=0 Nov 24 09:06:37 crc kubenswrapper[4886]: I1124 09:06:37.304398 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lz6bg" event={"ID":"647ee34f-ec2c-4feb-b583-4e37a20f96a3","Type":"ContainerDied","Data":"d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6"} Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.324076 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lz6bg" 
event={"ID":"647ee34f-ec2c-4feb-b583-4e37a20f96a3","Type":"ContainerStarted","Data":"fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513"} Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.406719 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.409885 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-lz6bg" podStartSLOduration=4.409861258 podStartE2EDuration="4.409861258s" podCreationTimestamp="2025-11-24 09:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:38.403545027 +0000 UTC m=+1054.290283162" watchObservedRunningTime="2025-11-24 09:06:38.409861258 +0000 UTC m=+1054.296599393" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.513609 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lz6bg"] Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.642799 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ts9fz"] Nov 24 09:06:38 crc kubenswrapper[4886]: E1124 09:06:38.643261 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" containerName="dnsmasq-dns" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.643281 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" containerName="dnsmasq-dns" Nov 24 09:06:38 crc kubenswrapper[4886]: E1124 09:06:38.643350 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" containerName="init" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.643359 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" containerName="init" Nov 24 09:06:38 
crc kubenswrapper[4886]: I1124 09:06:38.643569 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2785a5b-ca3d-45f1-a8b9-7c1eac61afcb" containerName="dnsmasq-dns" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.644687 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.657863 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ts9fz"] Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.706288 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-config\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.706372 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.706436 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.706806 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-nb\") pod 
\"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.706899 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894zv\" (UniqueName: \"kubernetes.io/projected/028c4a78-6e79-412a-954c-abf1cdf4d5a2-kube-api-access-894zv\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.808700 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.808807 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.808864 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.808886 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894zv\" (UniqueName: \"kubernetes.io/projected/028c4a78-6e79-412a-954c-abf1cdf4d5a2-kube-api-access-894zv\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" 
(UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.808907 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-config\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.809865 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-config\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.810597 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.811303 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.811891 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 
09:06:38.835384 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-894zv\" (UniqueName: \"kubernetes.io/projected/028c4a78-6e79-412a-954c-abf1cdf4d5a2-kube-api-access-894zv\") pod \"dnsmasq-dns-b8fbc5445-ts9fz\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.874035 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.938624 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.969660 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:38 crc kubenswrapper[4886]: I1124 09:06:38.981714 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.016050 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.332562 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.333253 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.381069 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.381800 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.491054 4886 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ts9fz"] Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.727544 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.746031 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.752844 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.752896 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.752844 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sksg2" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.753254 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.777663 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.791238 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.793186 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.798397 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.798746 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.798992 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-z6f5c" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.799120 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.806205 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.829474 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xch8k\" (UniqueName: \"kubernetes.io/projected/28013454-2b4a-4d68-87fa-272095c8a651-kube-api-access-xch8k\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.829660 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28013454-2b4a-4d68-87fa-272095c8a651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.829749 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " 
pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.829872 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28013454-2b4a-4d68-87fa-272095c8a651-scripts\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.829958 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.830036 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28013454-2b4a-4d68-87fa-272095c8a651-config\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.830124 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932334 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-cache\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932446 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xch8k\" (UniqueName: \"kubernetes.io/projected/28013454-2b4a-4d68-87fa-272095c8a651-kube-api-access-xch8k\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932490 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932546 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28013454-2b4a-4d68-87fa-272095c8a651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932581 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932619 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-lock\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932676 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") pod 
\"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932707 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28013454-2b4a-4d68-87fa-272095c8a651-scripts\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932738 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932775 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28013454-2b4a-4d68-87fa-272095c8a651-config\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932802 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6km\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-kube-api-access-cl6km\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.932844 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.933363 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/28013454-2b4a-4d68-87fa-272095c8a651-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.934285 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28013454-2b4a-4d68-87fa-272095c8a651-config\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.934349 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28013454-2b4a-4d68-87fa-272095c8a651-scripts\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.941810 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.953982 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xch8k\" (UniqueName: \"kubernetes.io/projected/28013454-2b4a-4d68-87fa-272095c8a651-kube-api-access-xch8k\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.954003 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " 
pod="openstack/ovn-northd-0" Nov 24 09:06:39 crc kubenswrapper[4886]: I1124 09:06:39.964058 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/28013454-2b4a-4d68-87fa-272095c8a651-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"28013454-2b4a-4d68-87fa-272095c8a651\") " pod="openstack/ovn-northd-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.034766 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-cache\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.035144 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.035356 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-lock\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.035424 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-cache\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.035570 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") 
pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: E1124 09:06:40.035832 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 09:06:40 crc kubenswrapper[4886]: E1124 09:06:40.035911 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 09:06:40 crc kubenswrapper[4886]: E1124 09:06:40.036064 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift podName:65b7f4e6-3f5e-419b-9761-c0fc78a4632d nodeName:}" failed. No retries permitted until 2025-11-24 09:06:40.536041036 +0000 UTC m=+1056.422779171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift") pod "swift-storage-0" (UID: "65b7f4e6-3f5e-419b-9761-c0fc78a4632d") : configmap "swift-ring-files" not found Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.035963 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.037375 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-lock\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.038480 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cl6km\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-kube-api-access-cl6km\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.063348 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl6km\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-kube-api-access-cl6km\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.065244 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.116192 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.344477 4886 generic.go:334] "Generic (PLEG): container finished" podID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerID="076b0adb589615dd8664bc77955de8b132285fa883ef6ce2a66466567eee6be6" exitCode=0 Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.344907 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" event={"ID":"028c4a78-6e79-412a-954c-abf1cdf4d5a2","Type":"ContainerDied","Data":"076b0adb589615dd8664bc77955de8b132285fa883ef6ce2a66466567eee6be6"} Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.344979 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" event={"ID":"028c4a78-6e79-412a-954c-abf1cdf4d5a2","Type":"ContainerStarted","Data":"a1388ecae04dcf8d19014e7b5c5f8344ac347b6502925e8cb6eed15e9b56426b"} Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.345013 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-lz6bg" podUID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" containerName="dnsmasq-dns" containerID="cri-o://fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513" gracePeriod=10 Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.549764 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:40 crc kubenswrapper[4886]: E1124 09:06:40.550947 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 09:06:40 crc kubenswrapper[4886]: E1124 09:06:40.550978 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: 
configmap "swift-ring-files" not found Nov 24 09:06:40 crc kubenswrapper[4886]: E1124 09:06:40.551027 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift podName:65b7f4e6-3f5e-419b-9761-c0fc78a4632d nodeName:}" failed. No retries permitted until 2025-11-24 09:06:41.551008317 +0000 UTC m=+1057.437746442 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift") pod "swift-storage-0" (UID: "65b7f4e6-3f5e-419b-9761-c0fc78a4632d") : configmap "swift-ring-files" not found Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.608787 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.768048 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.854212 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-config\") pod \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.854491 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-nb\") pod \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.854593 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rjqn\" (UniqueName: \"kubernetes.io/projected/647ee34f-ec2c-4feb-b583-4e37a20f96a3-kube-api-access-9rjqn\") pod 
\"647ee34f-ec2c-4feb-b583-4e37a20f96a3\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.854628 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-sb\") pod \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.854662 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-dns-svc\") pod \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\" (UID: \"647ee34f-ec2c-4feb-b583-4e37a20f96a3\") " Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.863133 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647ee34f-ec2c-4feb-b583-4e37a20f96a3-kube-api-access-9rjqn" (OuterVolumeSpecName: "kube-api-access-9rjqn") pod "647ee34f-ec2c-4feb-b583-4e37a20f96a3" (UID: "647ee34f-ec2c-4feb-b583-4e37a20f96a3"). InnerVolumeSpecName "kube-api-access-9rjqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.925041 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "647ee34f-ec2c-4feb-b583-4e37a20f96a3" (UID: "647ee34f-ec2c-4feb-b583-4e37a20f96a3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.941734 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "647ee34f-ec2c-4feb-b583-4e37a20f96a3" (UID: "647ee34f-ec2c-4feb-b583-4e37a20f96a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.943849 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "647ee34f-ec2c-4feb-b583-4e37a20f96a3" (UID: "647ee34f-ec2c-4feb-b583-4e37a20f96a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.948825 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-config" (OuterVolumeSpecName: "config") pod "647ee34f-ec2c-4feb-b583-4e37a20f96a3" (UID: "647ee34f-ec2c-4feb-b583-4e37a20f96a3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.957802 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.957847 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rjqn\" (UniqueName: \"kubernetes.io/projected/647ee34f-ec2c-4feb-b583-4e37a20f96a3-kube-api-access-9rjqn\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.957883 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.957898 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:40 crc kubenswrapper[4886]: I1124 09:06:40.957910 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647ee34f-ec2c-4feb-b583-4e37a20f96a3-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.356063 4886 generic.go:334] "Generic (PLEG): container finished" podID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" containerID="fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513" exitCode=0 Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.356190 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-lz6bg" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.356195 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lz6bg" event={"ID":"647ee34f-ec2c-4feb-b583-4e37a20f96a3","Type":"ContainerDied","Data":"fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513"} Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.356673 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-lz6bg" event={"ID":"647ee34f-ec2c-4feb-b583-4e37a20f96a3","Type":"ContainerDied","Data":"d685dc4fa428ee8c5303c0d429b31f27513d1a20b9879a26d9cdd0adbdfb4649"} Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.356703 4886 scope.go:117] "RemoveContainer" containerID="fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.358141 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"28013454-2b4a-4d68-87fa-272095c8a651","Type":"ContainerStarted","Data":"3ac4136c8d2e4213014890392ef8b84e5f563f4baf9315ea57b742436697afdd"} Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.363452 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" event={"ID":"028c4a78-6e79-412a-954c-abf1cdf4d5a2","Type":"ContainerStarted","Data":"2a57bdaff3d59792302a81061c7531f811a5a9fae61be7ced5e8a140e8d01508"} Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.364498 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.387449 4886 scope.go:117] "RemoveContainer" containerID="d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.390769 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" podStartSLOduration=3.390749747 podStartE2EDuration="3.390749747s" podCreationTimestamp="2025-11-24 09:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:41.383963153 +0000 UTC m=+1057.270701298" watchObservedRunningTime="2025-11-24 09:06:41.390749747 +0000 UTC m=+1057.277487902" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.407260 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lz6bg"] Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.414964 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-lz6bg"] Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.422520 4886 scope.go:117] "RemoveContainer" containerID="fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513" Nov 24 09:06:41 crc kubenswrapper[4886]: E1124 09:06:41.423335 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513\": container with ID starting with fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513 not found: ID does not exist" containerID="fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.423496 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513"} err="failed to get container status \"fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513\": rpc error: code = NotFound desc = could not find container \"fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513\": container with ID starting with fa52e26d532fa5a97d26c9f138eda149e042e7d5ce5359a514ed5c20f0e7b513 not found: ID does not exist" 
Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.423590 4886 scope.go:117] "RemoveContainer" containerID="d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6" Nov 24 09:06:41 crc kubenswrapper[4886]: E1124 09:06:41.424020 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6\": container with ID starting with d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6 not found: ID does not exist" containerID="d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.424059 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6"} err="failed to get container status \"d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6\": rpc error: code = NotFound desc = could not find container \"d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6\": container with ID starting with d02970b5d77e8a6494dc2c364a9d7086f28c6cdb5b9c67df5bb9a950197803e6 not found: ID does not exist" Nov 24 09:06:41 crc kubenswrapper[4886]: I1124 09:06:41.571067 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:41 crc kubenswrapper[4886]: E1124 09:06:41.571447 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 09:06:41 crc kubenswrapper[4886]: E1124 09:06:41.571480 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 09:06:41 crc 
kubenswrapper[4886]: E1124 09:06:41.571547 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift podName:65b7f4e6-3f5e-419b-9761-c0fc78a4632d nodeName:}" failed. No retries permitted until 2025-11-24 09:06:43.571525069 +0000 UTC m=+1059.458263214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift") pod "swift-storage-0" (UID: "65b7f4e6-3f5e-419b-9761-c0fc78a4632d") : configmap "swift-ring-files" not found Nov 24 09:06:42 crc kubenswrapper[4886]: I1124 09:06:42.379658 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"28013454-2b4a-4d68-87fa-272095c8a651","Type":"ContainerStarted","Data":"7b94ff03f125c811585574f586227114874f03609710ccafcbcbe0156a615eed"} Nov 24 09:06:42 crc kubenswrapper[4886]: I1124 09:06:42.864854 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" path="/var/lib/kubelet/pods/647ee34f-ec2c-4feb-b583-4e37a20f96a3/volumes" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.390014 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"28013454-2b4a-4d68-87fa-272095c8a651","Type":"ContainerStarted","Data":"d50a6d4a5d9f587e066cc78806374b8e27a133245d53958cfccfee67259c78cc"} Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.390076 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.418327 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.051980234 podStartE2EDuration="4.418295997s" podCreationTimestamp="2025-11-24 09:06:39 +0000 UTC" firstStartedPulling="2025-11-24 09:06:40.627904307 +0000 UTC m=+1056.514642442" 
lastFinishedPulling="2025-11-24 09:06:41.99422007 +0000 UTC m=+1057.880958205" observedRunningTime="2025-11-24 09:06:43.4117671 +0000 UTC m=+1059.298505235" watchObservedRunningTime="2025-11-24 09:06:43.418295997 +0000 UTC m=+1059.305034142" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.454070 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dxnk5"] Nov 24 09:06:43 crc kubenswrapper[4886]: E1124 09:06:43.454522 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" containerName="init" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.454550 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" containerName="init" Nov 24 09:06:43 crc kubenswrapper[4886]: E1124 09:06:43.454571 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" containerName="dnsmasq-dns" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.454581 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" containerName="dnsmasq-dns" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.454737 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="647ee34f-ec2c-4feb-b583-4e37a20f96a3" containerName="dnsmasq-dns" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.455387 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.457485 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.457491 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.459747 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.476082 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dxnk5"] Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.619944 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-combined-ca-bundle\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.620046 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-dispersionconf\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.620136 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-ring-data-devices\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: 
I1124 09:06:43.620213 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.620268 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9a54508-7f70-4e5d-952a-587f8fabeb1c-etc-swift\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.620309 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-scripts\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.620434 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjbl\" (UniqueName: \"kubernetes.io/projected/c9a54508-7f70-4e5d-952a-587f8fabeb1c-kube-api-access-bwjbl\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.620512 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-swiftconf\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: E1124 09:06:43.620463 4886 projected.go:288] Couldn't 
get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 09:06:43 crc kubenswrapper[4886]: E1124 09:06:43.620581 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 09:06:43 crc kubenswrapper[4886]: E1124 09:06:43.620657 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift podName:65b7f4e6-3f5e-419b-9761-c0fc78a4632d nodeName:}" failed. No retries permitted until 2025-11-24 09:06:47.620628725 +0000 UTC m=+1063.507367050 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift") pod "swift-storage-0" (UID: "65b7f4e6-3f5e-419b-9761-c0fc78a4632d") : configmap "swift-ring-files" not found Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.722490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-scripts\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.722577 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjbl\" (UniqueName: \"kubernetes.io/projected/c9a54508-7f70-4e5d-952a-587f8fabeb1c-kube-api-access-bwjbl\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.722615 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-swiftconf\") pod \"swift-ring-rebalance-dxnk5\" (UID: 
\"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.722651 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-combined-ca-bundle\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.722696 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-dispersionconf\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.722720 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-ring-data-devices\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.722773 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9a54508-7f70-4e5d-952a-587f8fabeb1c-etc-swift\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.723355 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9a54508-7f70-4e5d-952a-587f8fabeb1c-etc-swift\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 
09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.723559 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-scripts\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.723861 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-ring-data-devices\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.729645 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-swiftconf\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.729645 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-dispersionconf\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.729812 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-combined-ca-bundle\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.741835 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bwjbl\" (UniqueName: \"kubernetes.io/projected/c9a54508-7f70-4e5d-952a-587f8fabeb1c-kube-api-access-bwjbl\") pod \"swift-ring-rebalance-dxnk5\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:43 crc kubenswrapper[4886]: I1124 09:06:43.774145 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:44 crc kubenswrapper[4886]: I1124 09:06:44.230806 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dxnk5"] Nov 24 09:06:44 crc kubenswrapper[4886]: I1124 09:06:44.404038 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dxnk5" event={"ID":"c9a54508-7f70-4e5d-952a-587f8fabeb1c","Type":"ContainerStarted","Data":"004ed56e9b7ce9f38ac92a7c63148ca227fc429cf89e3c599ea71e8647692736"} Nov 24 09:06:44 crc kubenswrapper[4886]: I1124 09:06:44.505803 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 09:06:44 crc kubenswrapper[4886]: I1124 09:06:44.505908 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 09:06:44 crc kubenswrapper[4886]: I1124 09:06:44.598268 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 09:06:45 crc kubenswrapper[4886]: I1124 09:06:45.734649 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 09:06:45 crc kubenswrapper[4886]: I1124 09:06:45.750668 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:45 crc kubenswrapper[4886]: I1124 09:06:45.750720 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:45 crc kubenswrapper[4886]: I1124 
09:06:45.864681 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.062545 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8106-account-create-dkh5l"] Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.064993 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.071603 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.075818 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cbj48"] Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.077322 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cbj48" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.091374 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8106-account-create-dkh5l"] Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.106342 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cbj48"] Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.184611 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a39199-f209-40bc-932d-9b0274ce5a12-operator-scripts\") pod \"placement-db-create-cbj48\" (UID: \"19a39199-f209-40bc-932d-9b0274ce5a12\") " pod="openstack/placement-db-create-cbj48" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.185074 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx24k\" (UniqueName: 
\"kubernetes.io/projected/18c6c7c4-c29b-4884-9803-ee0d75bd2791-kube-api-access-nx24k\") pod \"placement-8106-account-create-dkh5l\" (UID: \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\") " pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.185129 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xf5\" (UniqueName: \"kubernetes.io/projected/19a39199-f209-40bc-932d-9b0274ce5a12-kube-api-access-69xf5\") pod \"placement-db-create-cbj48\" (UID: \"19a39199-f209-40bc-932d-9b0274ce5a12\") " pod="openstack/placement-db-create-cbj48" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.185242 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c6c7c4-c29b-4884-9803-ee0d75bd2791-operator-scripts\") pod \"placement-8106-account-create-dkh5l\" (UID: \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\") " pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.286918 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx24k\" (UniqueName: \"kubernetes.io/projected/18c6c7c4-c29b-4884-9803-ee0d75bd2791-kube-api-access-nx24k\") pod \"placement-8106-account-create-dkh5l\" (UID: \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\") " pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.286985 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69xf5\" (UniqueName: \"kubernetes.io/projected/19a39199-f209-40bc-932d-9b0274ce5a12-kube-api-access-69xf5\") pod \"placement-db-create-cbj48\" (UID: \"19a39199-f209-40bc-932d-9b0274ce5a12\") " pod="openstack/placement-db-create-cbj48" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.287028 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c6c7c4-c29b-4884-9803-ee0d75bd2791-operator-scripts\") pod \"placement-8106-account-create-dkh5l\" (UID: \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\") " pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.287117 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a39199-f209-40bc-932d-9b0274ce5a12-operator-scripts\") pod \"placement-db-create-cbj48\" (UID: \"19a39199-f209-40bc-932d-9b0274ce5a12\") " pod="openstack/placement-db-create-cbj48" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.288207 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a39199-f209-40bc-932d-9b0274ce5a12-operator-scripts\") pod \"placement-db-create-cbj48\" (UID: \"19a39199-f209-40bc-932d-9b0274ce5a12\") " pod="openstack/placement-db-create-cbj48" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.288529 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c6c7c4-c29b-4884-9803-ee0d75bd2791-operator-scripts\") pod \"placement-8106-account-create-dkh5l\" (UID: \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\") " pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.311827 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69xf5\" (UniqueName: \"kubernetes.io/projected/19a39199-f209-40bc-932d-9b0274ce5a12-kube-api-access-69xf5\") pod \"placement-db-create-cbj48\" (UID: \"19a39199-f209-40bc-932d-9b0274ce5a12\") " pod="openstack/placement-db-create-cbj48" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.314383 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-nx24k\" (UniqueName: \"kubernetes.io/projected/18c6c7c4-c29b-4884-9803-ee0d75bd2791-kube-api-access-nx24k\") pod \"placement-8106-account-create-dkh5l\" (UID: \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\") " pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.394135 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.408227 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cbj48" Nov 24 09:06:46 crc kubenswrapper[4886]: I1124 09:06:46.514737 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 09:06:47 crc kubenswrapper[4886]: I1124 09:06:47.716969 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:47 crc kubenswrapper[4886]: E1124 09:06:47.717281 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 09:06:47 crc kubenswrapper[4886]: E1124 09:06:47.717730 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 09:06:47 crc kubenswrapper[4886]: E1124 09:06:47.717842 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift podName:65b7f4e6-3f5e-419b-9761-c0fc78a4632d nodeName:}" failed. No retries permitted until 2025-11-24 09:06:55.717810567 +0000 UTC m=+1071.604548702 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift") pod "swift-storage-0" (UID: "65b7f4e6-3f5e-419b-9761-c0fc78a4632d") : configmap "swift-ring-files" not found Nov 24 09:06:48 crc kubenswrapper[4886]: I1124 09:06:48.452487 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dxnk5" event={"ID":"c9a54508-7f70-4e5d-952a-587f8fabeb1c","Type":"ContainerStarted","Data":"20e09619d69e3a480a7d435012c68cf01b943931d3d8e00378645a34077e5be1"} Nov 24 09:06:48 crc kubenswrapper[4886]: I1124 09:06:48.478706 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cbj48"] Nov 24 09:06:48 crc kubenswrapper[4886]: I1124 09:06:48.484709 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8106-account-create-dkh5l"] Nov 24 09:06:48 crc kubenswrapper[4886]: I1124 09:06:48.491440 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dxnk5" podStartSLOduration=1.743343699 podStartE2EDuration="5.491413735s" podCreationTimestamp="2025-11-24 09:06:43 +0000 UTC" firstStartedPulling="2025-11-24 09:06:44.238779157 +0000 UTC m=+1060.125517292" lastFinishedPulling="2025-11-24 09:06:47.986849193 +0000 UTC m=+1063.873587328" observedRunningTime="2025-11-24 09:06:48.485529847 +0000 UTC m=+1064.372267992" watchObservedRunningTime="2025-11-24 09:06:48.491413735 +0000 UTC m=+1064.378151870" Nov 24 09:06:48 crc kubenswrapper[4886]: I1124 09:06:48.983326 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.051916 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k45wp"] Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.052553 4886 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" podUID="0ce0238f-3de7-44fa-8e99-5345309b8c44" containerName="dnsmasq-dns" containerID="cri-o://20bd5c0539ec2cb750f212f1008cf01d76dcdb39bb30bccd224032d7f092cb85" gracePeriod=10 Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.468474 4886 generic.go:334] "Generic (PLEG): container finished" podID="18c6c7c4-c29b-4884-9803-ee0d75bd2791" containerID="d6df49255f3dce252e91237451426f9168cb6ac269bbf6064047991e81d048fd" exitCode=0 Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.468593 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8106-account-create-dkh5l" event={"ID":"18c6c7c4-c29b-4884-9803-ee0d75bd2791","Type":"ContainerDied","Data":"d6df49255f3dce252e91237451426f9168cb6ac269bbf6064047991e81d048fd"} Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.469062 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8106-account-create-dkh5l" event={"ID":"18c6c7c4-c29b-4884-9803-ee0d75bd2791","Type":"ContainerStarted","Data":"628af15502e92cc5397b4756cf888ee47a3b5ded5f45b0eba774f3056da8bd44"} Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.472401 4886 generic.go:334] "Generic (PLEG): container finished" podID="19a39199-f209-40bc-932d-9b0274ce5a12" containerID="72ebab10f42383296e64cfed0ad748412174894810e8d30511ef9d6e00f3be8f" exitCode=0 Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.472482 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cbj48" event={"ID":"19a39199-f209-40bc-932d-9b0274ce5a12","Type":"ContainerDied","Data":"72ebab10f42383296e64cfed0ad748412174894810e8d30511ef9d6e00f3be8f"} Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.472511 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cbj48" 
event={"ID":"19a39199-f209-40bc-932d-9b0274ce5a12","Type":"ContainerStarted","Data":"b38deb55b7f40661b207e7a9b26cd36e930af31307e12fa422494b5e5ae400c1"} Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.478260 4886 generic.go:334] "Generic (PLEG): container finished" podID="0ce0238f-3de7-44fa-8e99-5345309b8c44" containerID="20bd5c0539ec2cb750f212f1008cf01d76dcdb39bb30bccd224032d7f092cb85" exitCode=0 Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.478480 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" event={"ID":"0ce0238f-3de7-44fa-8e99-5345309b8c44","Type":"ContainerDied","Data":"20bd5c0539ec2cb750f212f1008cf01d76dcdb39bb30bccd224032d7f092cb85"} Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.581574 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.766955 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-config\") pod \"0ce0238f-3de7-44fa-8e99-5345309b8c44\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.767328 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pm47\" (UniqueName: \"kubernetes.io/projected/0ce0238f-3de7-44fa-8e99-5345309b8c44-kube-api-access-9pm47\") pod \"0ce0238f-3de7-44fa-8e99-5345309b8c44\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.767468 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-dns-svc\") pod \"0ce0238f-3de7-44fa-8e99-5345309b8c44\" (UID: \"0ce0238f-3de7-44fa-8e99-5345309b8c44\") " Nov 24 09:06:49 crc kubenswrapper[4886]: 
I1124 09:06:49.786538 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce0238f-3de7-44fa-8e99-5345309b8c44-kube-api-access-9pm47" (OuterVolumeSpecName: "kube-api-access-9pm47") pod "0ce0238f-3de7-44fa-8e99-5345309b8c44" (UID: "0ce0238f-3de7-44fa-8e99-5345309b8c44"). InnerVolumeSpecName "kube-api-access-9pm47". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.813543 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ce0238f-3de7-44fa-8e99-5345309b8c44" (UID: "0ce0238f-3de7-44fa-8e99-5345309b8c44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.822111 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-config" (OuterVolumeSpecName: "config") pod "0ce0238f-3de7-44fa-8e99-5345309b8c44" (UID: "0ce0238f-3de7-44fa-8e99-5345309b8c44"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.870857 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.871339 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pm47\" (UniqueName: \"kubernetes.io/projected/0ce0238f-3de7-44fa-8e99-5345309b8c44-kube-api-access-9pm47\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:49 crc kubenswrapper[4886]: I1124 09:06:49.871427 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ce0238f-3de7-44fa-8e99-5345309b8c44-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:50 crc kubenswrapper[4886]: I1124 09:06:50.490038 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" event={"ID":"0ce0238f-3de7-44fa-8e99-5345309b8c44","Type":"ContainerDied","Data":"3bd3d7e233d17e25ef103151b9deeb2ead007499c22fe0dfa7b25ef541652d13"} Nov 24 09:06:50 crc kubenswrapper[4886]: I1124 09:06:50.490516 4886 scope.go:117] "RemoveContainer" containerID="20bd5c0539ec2cb750f212f1008cf01d76dcdb39bb30bccd224032d7f092cb85" Nov 24 09:06:50 crc kubenswrapper[4886]: I1124 09:06:50.490247 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k45wp" Nov 24 09:06:50 crc kubenswrapper[4886]: I1124 09:06:50.522742 4886 scope.go:117] "RemoveContainer" containerID="26d3f2cb1b5105a1923875bb00474f6d8ed4589d8f2fd1dcaa3634aa7a414d08" Nov 24 09:06:50 crc kubenswrapper[4886]: I1124 09:06:50.543831 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k45wp"] Nov 24 09:06:50 crc kubenswrapper[4886]: I1124 09:06:50.550744 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k45wp"] Nov 24 09:06:50 crc kubenswrapper[4886]: I1124 09:06:50.862707 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ce0238f-3de7-44fa-8e99-5345309b8c44" path="/var/lib/kubelet/pods/0ce0238f-3de7-44fa-8e99-5345309b8c44/volumes" Nov 24 09:06:50 crc kubenswrapper[4886]: I1124 09:06:50.981919 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.011669 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c6c7c4-c29b-4884-9803-ee0d75bd2791-operator-scripts\") pod \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\" (UID: \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\") " Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.011803 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx24k\" (UniqueName: \"kubernetes.io/projected/18c6c7c4-c29b-4884-9803-ee0d75bd2791-kube-api-access-nx24k\") pod \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\" (UID: \"18c6c7c4-c29b-4884-9803-ee0d75bd2791\") " Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.012815 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c6c7c4-c29b-4884-9803-ee0d75bd2791-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "18c6c7c4-c29b-4884-9803-ee0d75bd2791" (UID: "18c6c7c4-c29b-4884-9803-ee0d75bd2791"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.043602 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c6c7c4-c29b-4884-9803-ee0d75bd2791-kube-api-access-nx24k" (OuterVolumeSpecName: "kube-api-access-nx24k") pod "18c6c7c4-c29b-4884-9803-ee0d75bd2791" (UID: "18c6c7c4-c29b-4884-9803-ee0d75bd2791"). InnerVolumeSpecName "kube-api-access-nx24k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.106594 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cbj48" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.115857 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a39199-f209-40bc-932d-9b0274ce5a12-operator-scripts\") pod \"19a39199-f209-40bc-932d-9b0274ce5a12\" (UID: \"19a39199-f209-40bc-932d-9b0274ce5a12\") " Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.115977 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69xf5\" (UniqueName: \"kubernetes.io/projected/19a39199-f209-40bc-932d-9b0274ce5a12-kube-api-access-69xf5\") pod \"19a39199-f209-40bc-932d-9b0274ce5a12\" (UID: \"19a39199-f209-40bc-932d-9b0274ce5a12\") " Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.116553 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c6c7c4-c29b-4884-9803-ee0d75bd2791-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.116570 4886 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nx24k\" (UniqueName: \"kubernetes.io/projected/18c6c7c4-c29b-4884-9803-ee0d75bd2791-kube-api-access-nx24k\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.117168 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a39199-f209-40bc-932d-9b0274ce5a12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19a39199-f209-40bc-932d-9b0274ce5a12" (UID: "19a39199-f209-40bc-932d-9b0274ce5a12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.137456 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a39199-f209-40bc-932d-9b0274ce5a12-kube-api-access-69xf5" (OuterVolumeSpecName: "kube-api-access-69xf5") pod "19a39199-f209-40bc-932d-9b0274ce5a12" (UID: "19a39199-f209-40bc-932d-9b0274ce5a12"). InnerVolumeSpecName "kube-api-access-69xf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.223710 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a39199-f209-40bc-932d-9b0274ce5a12-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.223775 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69xf5\" (UniqueName: \"kubernetes.io/projected/19a39199-f209-40bc-932d-9b0274ce5a12-kube-api-access-69xf5\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.311225 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mhdfz"] Nov 24 09:06:51 crc kubenswrapper[4886]: E1124 09:06:51.311750 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c6c7c4-c29b-4884-9803-ee0d75bd2791" containerName="mariadb-account-create" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.311779 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c6c7c4-c29b-4884-9803-ee0d75bd2791" containerName="mariadb-account-create" Nov 24 09:06:51 crc kubenswrapper[4886]: E1124 09:06:51.311797 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce0238f-3de7-44fa-8e99-5345309b8c44" containerName="init" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.311806 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce0238f-3de7-44fa-8e99-5345309b8c44" containerName="init" Nov 24 09:06:51 crc kubenswrapper[4886]: E1124 09:06:51.311817 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a39199-f209-40bc-932d-9b0274ce5a12" containerName="mariadb-database-create" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.311824 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a39199-f209-40bc-932d-9b0274ce5a12" containerName="mariadb-database-create" Nov 24 09:06:51 crc 
kubenswrapper[4886]: E1124 09:06:51.311842 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce0238f-3de7-44fa-8e99-5345309b8c44" containerName="dnsmasq-dns" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.311849 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce0238f-3de7-44fa-8e99-5345309b8c44" containerName="dnsmasq-dns" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.312037 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a39199-f209-40bc-932d-9b0274ce5a12" containerName="mariadb-database-create" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.312052 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce0238f-3de7-44fa-8e99-5345309b8c44" containerName="dnsmasq-dns" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.312068 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c6c7c4-c29b-4884-9803-ee0d75bd2791" containerName="mariadb-account-create" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.312817 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.324751 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mhdfz"] Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.324874 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ghb\" (UniqueName: \"kubernetes.io/projected/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-kube-api-access-g7ghb\") pod \"glance-db-create-mhdfz\" (UID: \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\") " pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.324946 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-operator-scripts\") pod \"glance-db-create-mhdfz\" (UID: \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\") " pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.393874 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e102-account-create-rjpsv"] Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.395129 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.398353 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.412775 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e102-account-create-rjpsv"] Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.426064 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wnl\" (UniqueName: \"kubernetes.io/projected/ed63768a-813f-4e2e-8a49-878492cc908c-kube-api-access-v6wnl\") pod \"glance-e102-account-create-rjpsv\" (UID: \"ed63768a-813f-4e2e-8a49-878492cc908c\") " pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.426194 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed63768a-813f-4e2e-8a49-878492cc908c-operator-scripts\") pod \"glance-e102-account-create-rjpsv\" (UID: \"ed63768a-813f-4e2e-8a49-878492cc908c\") " pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.426237 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7ghb\" (UniqueName: \"kubernetes.io/projected/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-kube-api-access-g7ghb\") pod \"glance-db-create-mhdfz\" (UID: \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\") " pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.426400 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-operator-scripts\") pod \"glance-db-create-mhdfz\" (UID: \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\") " 
pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.427656 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-operator-scripts\") pod \"glance-db-create-mhdfz\" (UID: \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\") " pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.453013 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7ghb\" (UniqueName: \"kubernetes.io/projected/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-kube-api-access-g7ghb\") pod \"glance-db-create-mhdfz\" (UID: \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\") " pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.511601 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8106-account-create-dkh5l" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.511616 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8106-account-create-dkh5l" event={"ID":"18c6c7c4-c29b-4884-9803-ee0d75bd2791","Type":"ContainerDied","Data":"628af15502e92cc5397b4756cf888ee47a3b5ded5f45b0eba774f3056da8bd44"} Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.511680 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="628af15502e92cc5397b4756cf888ee47a3b5ded5f45b0eba774f3056da8bd44" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.517832 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cbj48" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.517866 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cbj48" event={"ID":"19a39199-f209-40bc-932d-9b0274ce5a12","Type":"ContainerDied","Data":"b38deb55b7f40661b207e7a9b26cd36e930af31307e12fa422494b5e5ae400c1"} Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.517934 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38deb55b7f40661b207e7a9b26cd36e930af31307e12fa422494b5e5ae400c1" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.527869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6wnl\" (UniqueName: \"kubernetes.io/projected/ed63768a-813f-4e2e-8a49-878492cc908c-kube-api-access-v6wnl\") pod \"glance-e102-account-create-rjpsv\" (UID: \"ed63768a-813f-4e2e-8a49-878492cc908c\") " pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.527984 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed63768a-813f-4e2e-8a49-878492cc908c-operator-scripts\") pod \"glance-e102-account-create-rjpsv\" (UID: \"ed63768a-813f-4e2e-8a49-878492cc908c\") " pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.529035 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed63768a-813f-4e2e-8a49-878492cc908c-operator-scripts\") pod \"glance-e102-account-create-rjpsv\" (UID: \"ed63768a-813f-4e2e-8a49-878492cc908c\") " pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.550875 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6wnl\" (UniqueName: 
\"kubernetes.io/projected/ed63768a-813f-4e2e-8a49-878492cc908c-kube-api-access-v6wnl\") pod \"glance-e102-account-create-rjpsv\" (UID: \"ed63768a-813f-4e2e-8a49-878492cc908c\") " pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.635321 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:51 crc kubenswrapper[4886]: I1124 09:06:51.797893 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:52 crc kubenswrapper[4886]: I1124 09:06:52.104054 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mhdfz"] Nov 24 09:06:52 crc kubenswrapper[4886]: I1124 09:06:52.246347 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e102-account-create-rjpsv"] Nov 24 09:06:52 crc kubenswrapper[4886]: I1124 09:06:52.534671 4886 generic.go:334] "Generic (PLEG): container finished" podID="c0d4e8bd-b5f7-4429-8494-e88fcdb32491" containerID="02118a86ec1c00b341f4e149ec6b1bdbeb9ccfb61da67958ae942f8004d6372c" exitCode=0 Nov 24 09:06:52 crc kubenswrapper[4886]: I1124 09:06:52.534786 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mhdfz" event={"ID":"c0d4e8bd-b5f7-4429-8494-e88fcdb32491","Type":"ContainerDied","Data":"02118a86ec1c00b341f4e149ec6b1bdbeb9ccfb61da67958ae942f8004d6372c"} Nov 24 09:06:52 crc kubenswrapper[4886]: I1124 09:06:52.534878 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mhdfz" event={"ID":"c0d4e8bd-b5f7-4429-8494-e88fcdb32491","Type":"ContainerStarted","Data":"b26c07d14b19ac2385c75ed5e2d23c397341bf9aeab3c02a6e1642c2d11f0bd4"} Nov 24 09:06:52 crc kubenswrapper[4886]: I1124 09:06:52.542454 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e102-account-create-rjpsv" 
event={"ID":"ed63768a-813f-4e2e-8a49-878492cc908c","Type":"ContainerStarted","Data":"0786816348c1f062fb2bc65a7e71b1bc252fb4f34cbb392aacf694c2e1faa01d"} Nov 24 09:06:52 crc kubenswrapper[4886]: I1124 09:06:52.542775 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e102-account-create-rjpsv" event={"ID":"ed63768a-813f-4e2e-8a49-878492cc908c","Type":"ContainerStarted","Data":"d85454ae480cb418d1222ac745c7205ac9f65e8c3fda738a07a3420ebadf7a31"} Nov 24 09:06:52 crc kubenswrapper[4886]: I1124 09:06:52.580576 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e102-account-create-rjpsv" podStartSLOduration=1.580552307 podStartE2EDuration="1.580552307s" podCreationTimestamp="2025-11-24 09:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:52.577355735 +0000 UTC m=+1068.464093870" watchObservedRunningTime="2025-11-24 09:06:52.580552307 +0000 UTC m=+1068.467290442" Nov 24 09:06:53 crc kubenswrapper[4886]: I1124 09:06:53.551011 4886 generic.go:334] "Generic (PLEG): container finished" podID="ed63768a-813f-4e2e-8a49-878492cc908c" containerID="0786816348c1f062fb2bc65a7e71b1bc252fb4f34cbb392aacf694c2e1faa01d" exitCode=0 Nov 24 09:06:53 crc kubenswrapper[4886]: I1124 09:06:53.551108 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e102-account-create-rjpsv" event={"ID":"ed63768a-813f-4e2e-8a49-878492cc908c","Type":"ContainerDied","Data":"0786816348c1f062fb2bc65a7e71b1bc252fb4f34cbb392aacf694c2e1faa01d"} Nov 24 09:06:53 crc kubenswrapper[4886]: I1124 09:06:53.965547 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.091985 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7ghb\" (UniqueName: \"kubernetes.io/projected/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-kube-api-access-g7ghb\") pod \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\" (UID: \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\") " Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.092099 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-operator-scripts\") pod \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\" (UID: \"c0d4e8bd-b5f7-4429-8494-e88fcdb32491\") " Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.092922 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0d4e8bd-b5f7-4429-8494-e88fcdb32491" (UID: "c0d4e8bd-b5f7-4429-8494-e88fcdb32491"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.097898 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-kube-api-access-g7ghb" (OuterVolumeSpecName: "kube-api-access-g7ghb") pod "c0d4e8bd-b5f7-4429-8494-e88fcdb32491" (UID: "c0d4e8bd-b5f7-4429-8494-e88fcdb32491"). InnerVolumeSpecName "kube-api-access-g7ghb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.194779 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7ghb\" (UniqueName: \"kubernetes.io/projected/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-kube-api-access-g7ghb\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.194822 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0d4e8bd-b5f7-4429-8494-e88fcdb32491-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.563344 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mhdfz" event={"ID":"c0d4e8bd-b5f7-4429-8494-e88fcdb32491","Type":"ContainerDied","Data":"b26c07d14b19ac2385c75ed5e2d23c397341bf9aeab3c02a6e1642c2d11f0bd4"} Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.563411 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26c07d14b19ac2385c75ed5e2d23c397341bf9aeab3c02a6e1642c2d11f0bd4" Nov 24 09:06:54 crc kubenswrapper[4886]: I1124 09:06:54.563414 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mhdfz" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.023992 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.186881 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.215369 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6wnl\" (UniqueName: \"kubernetes.io/projected/ed63768a-813f-4e2e-8a49-878492cc908c-kube-api-access-v6wnl\") pod \"ed63768a-813f-4e2e-8a49-878492cc908c\" (UID: \"ed63768a-813f-4e2e-8a49-878492cc908c\") " Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.215530 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed63768a-813f-4e2e-8a49-878492cc908c-operator-scripts\") pod \"ed63768a-813f-4e2e-8a49-878492cc908c\" (UID: \"ed63768a-813f-4e2e-8a49-878492cc908c\") " Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.216409 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed63768a-813f-4e2e-8a49-878492cc908c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed63768a-813f-4e2e-8a49-878492cc908c" (UID: "ed63768a-813f-4e2e-8a49-878492cc908c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.221609 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed63768a-813f-4e2e-8a49-878492cc908c-kube-api-access-v6wnl" (OuterVolumeSpecName: "kube-api-access-v6wnl") pod "ed63768a-813f-4e2e-8a49-878492cc908c" (UID: "ed63768a-813f-4e2e-8a49-878492cc908c"). InnerVolumeSpecName "kube-api-access-v6wnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.319547 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6wnl\" (UniqueName: \"kubernetes.io/projected/ed63768a-813f-4e2e-8a49-878492cc908c-kube-api-access-v6wnl\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.319599 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed63768a-813f-4e2e-8a49-878492cc908c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.576008 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e102-account-create-rjpsv" event={"ID":"ed63768a-813f-4e2e-8a49-878492cc908c","Type":"ContainerDied","Data":"d85454ae480cb418d1222ac745c7205ac9f65e8c3fda738a07a3420ebadf7a31"} Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.576070 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85454ae480cb418d1222ac745c7205ac9f65e8c3fda738a07a3420ebadf7a31" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.576211 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e102-account-create-rjpsv" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.652514 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9n6ds"] Nov 24 09:06:55 crc kubenswrapper[4886]: E1124 09:06:55.653017 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d4e8bd-b5f7-4429-8494-e88fcdb32491" containerName="mariadb-database-create" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.653041 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d4e8bd-b5f7-4429-8494-e88fcdb32491" containerName="mariadb-database-create" Nov 24 09:06:55 crc kubenswrapper[4886]: E1124 09:06:55.653082 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed63768a-813f-4e2e-8a49-878492cc908c" containerName="mariadb-account-create" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.653091 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed63768a-813f-4e2e-8a49-878492cc908c" containerName="mariadb-account-create" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.653315 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d4e8bd-b5f7-4429-8494-e88fcdb32491" containerName="mariadb-database-create" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.653341 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed63768a-813f-4e2e-8a49-878492cc908c" containerName="mariadb-account-create" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.654102 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.667097 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9n6ds"] Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.727811 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:06:55 crc kubenswrapper[4886]: E1124 09:06:55.728021 4886 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 09:06:55 crc kubenswrapper[4886]: E1124 09:06:55.728060 4886 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 09:06:55 crc kubenswrapper[4886]: E1124 09:06:55.728137 4886 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift podName:65b7f4e6-3f5e-419b-9761-c0fc78a4632d nodeName:}" failed. No retries permitted until 2025-11-24 09:07:11.728112484 +0000 UTC m=+1087.614850619 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift") pod "swift-storage-0" (UID: "65b7f4e6-3f5e-419b-9761-c0fc78a4632d") : configmap "swift-ring-files" not found Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.799703 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e011-account-create-rzd97"] Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.800971 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.806653 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.818049 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e011-account-create-rzd97"] Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.833385 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl5k6\" (UniqueName: \"kubernetes.io/projected/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-kube-api-access-sl5k6\") pod \"keystone-db-create-9n6ds\" (UID: \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\") " pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.833545 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-operator-scripts\") pod \"keystone-db-create-9n6ds\" (UID: \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\") " pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.935771 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzw9c\" (UniqueName: \"kubernetes.io/projected/6b674b68-5eef-4c55-817f-8dec4dc781fd-kube-api-access-gzw9c\") pod \"keystone-e011-account-create-rzd97\" (UID: \"6b674b68-5eef-4c55-817f-8dec4dc781fd\") " pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.935846 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl5k6\" (UniqueName: \"kubernetes.io/projected/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-kube-api-access-sl5k6\") pod \"keystone-db-create-9n6ds\" (UID: 
\"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\") " pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.936144 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-operator-scripts\") pod \"keystone-db-create-9n6ds\" (UID: \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\") " pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.936249 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b674b68-5eef-4c55-817f-8dec4dc781fd-operator-scripts\") pod \"keystone-e011-account-create-rzd97\" (UID: \"6b674b68-5eef-4c55-817f-8dec4dc781fd\") " pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.937205 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-operator-scripts\") pod \"keystone-db-create-9n6ds\" (UID: \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\") " pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.960225 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl5k6\" (UniqueName: \"kubernetes.io/projected/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-kube-api-access-sl5k6\") pod \"keystone-db-create-9n6ds\" (UID: \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\") " pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:55 crc kubenswrapper[4886]: I1124 09:06:55.982184 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.037980 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzw9c\" (UniqueName: \"kubernetes.io/projected/6b674b68-5eef-4c55-817f-8dec4dc781fd-kube-api-access-gzw9c\") pod \"keystone-e011-account-create-rzd97\" (UID: \"6b674b68-5eef-4c55-817f-8dec4dc781fd\") " pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.038214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b674b68-5eef-4c55-817f-8dec4dc781fd-operator-scripts\") pod \"keystone-e011-account-create-rzd97\" (UID: \"6b674b68-5eef-4c55-817f-8dec4dc781fd\") " pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.039247 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b674b68-5eef-4c55-817f-8dec4dc781fd-operator-scripts\") pod \"keystone-e011-account-create-rzd97\" (UID: \"6b674b68-5eef-4c55-817f-8dec4dc781fd\") " pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.076257 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzw9c\" (UniqueName: \"kubernetes.io/projected/6b674b68-5eef-4c55-817f-8dec4dc781fd-kube-api-access-gzw9c\") pod \"keystone-e011-account-create-rzd97\" (UID: \"6b674b68-5eef-4c55-817f-8dec4dc781fd\") " pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.177944 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.395443 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rzmth" podUID="b7951685-e0e7-4524-ba49-b720357aa59c" containerName="ovn-controller" probeResult="failure" output=< Nov 24 09:06:56 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 09:06:56 crc kubenswrapper[4886]: > Nov 24 09:06:56 crc kubenswrapper[4886]: W1124 09:06:56.463747 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ed0c697_2ac4_4500_a90d_e09d6ba279ae.slice/crio-0770f22fb2521ee071718df387565645d0bfaee9345402f777e9e308ac3d8193 WatchSource:0}: Error finding container 0770f22fb2521ee071718df387565645d0bfaee9345402f777e9e308ac3d8193: Status 404 returned error can't find the container with id 0770f22fb2521ee071718df387565645d0bfaee9345402f777e9e308ac3d8193 Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.465099 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9n6ds"] Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.586838 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9n6ds" event={"ID":"9ed0c697-2ac4-4500-a90d-e09d6ba279ae","Type":"ContainerStarted","Data":"0770f22fb2521ee071718df387565645d0bfaee9345402f777e9e308ac3d8193"} Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.660752 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e011-account-create-rzd97"] Nov 24 09:06:56 crc kubenswrapper[4886]: W1124 09:06:56.663362 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b674b68_5eef_4c55_817f_8dec4dc781fd.slice/crio-badff5857db8e63c75c8bb4baaa4ea031d99363f6a50d0b4b7d53d6f9b98040a WatchSource:0}: Error finding container badff5857db8e63c75c8bb4baaa4ea031d99363f6a50d0b4b7d53d6f9b98040a: Status 404 returned error can't find the container with id badff5857db8e63c75c8bb4baaa4ea031d99363f6a50d0b4b7d53d6f9b98040a Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.715280 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4sj8x"] Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.716645 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.722134 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.722274 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ld45n" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.755772 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4sj8x"] Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.853204 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547jv\" (UniqueName: \"kubernetes.io/projected/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-kube-api-access-547jv\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.853698 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-combined-ca-bundle\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") 
" pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.853764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-config-data\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.853815 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-db-sync-config-data\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.955522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-combined-ca-bundle\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.955634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-config-data\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.955736 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-db-sync-config-data\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 
09:06:56.955833 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-547jv\" (UniqueName: \"kubernetes.io/projected/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-kube-api-access-547jv\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.963494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-db-sync-config-data\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.964186 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-combined-ca-bundle\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.970098 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-config-data\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:56 crc kubenswrapper[4886]: I1124 09:06:56.988866 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-547jv\" (UniqueName: \"kubernetes.io/projected/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-kube-api-access-547jv\") pod \"glance-db-sync-4sj8x\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.053663 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4sj8x" Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.598410 4886 generic.go:334] "Generic (PLEG): container finished" podID="c9a54508-7f70-4e5d-952a-587f8fabeb1c" containerID="20e09619d69e3a480a7d435012c68cf01b943931d3d8e00378645a34077e5be1" exitCode=0 Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.598634 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dxnk5" event={"ID":"c9a54508-7f70-4e5d-952a-587f8fabeb1c","Type":"ContainerDied","Data":"20e09619d69e3a480a7d435012c68cf01b943931d3d8e00378645a34077e5be1"} Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.603239 4886 generic.go:334] "Generic (PLEG): container finished" podID="6b674b68-5eef-4c55-817f-8dec4dc781fd" containerID="3c84c6617657ebb244bbabcc08a3d5478d8672f9c4313122c44cc435d41e0cbc" exitCode=0 Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.603320 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e011-account-create-rzd97" event={"ID":"6b674b68-5eef-4c55-817f-8dec4dc781fd","Type":"ContainerDied","Data":"3c84c6617657ebb244bbabcc08a3d5478d8672f9c4313122c44cc435d41e0cbc"} Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.603447 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e011-account-create-rzd97" event={"ID":"6b674b68-5eef-4c55-817f-8dec4dc781fd","Type":"ContainerStarted","Data":"badff5857db8e63c75c8bb4baaa4ea031d99363f6a50d0b4b7d53d6f9b98040a"} Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.607267 4886 generic.go:334] "Generic (PLEG): container finished" podID="9ed0c697-2ac4-4500-a90d-e09d6ba279ae" containerID="f7d92ca92717e86cfb25908c6478d07438610dc900d55fdbfdd9f32a7eedf7e0" exitCode=0 Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.607339 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9n6ds" 
event={"ID":"9ed0c697-2ac4-4500-a90d-e09d6ba279ae","Type":"ContainerDied","Data":"f7d92ca92717e86cfb25908c6478d07438610dc900d55fdbfdd9f32a7eedf7e0"} Nov 24 09:06:57 crc kubenswrapper[4886]: I1124 09:06:57.620702 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4sj8x"] Nov 24 09:06:58 crc kubenswrapper[4886]: I1124 09:06:58.617898 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4sj8x" event={"ID":"f45428c8-b123-4e3e-9ba0-5ab11cf317a5","Type":"ContainerStarted","Data":"5a01627cc6be978907eec89b811bf58c99d17dd220703d43c3735633a89210b8"} Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.233205 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.240428 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.249339 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.304687 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-swiftconf\") pod \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.304747 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-operator-scripts\") pod \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\" (UID: \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.304854 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b674b68-5eef-4c55-817f-8dec4dc781fd-operator-scripts\") pod \"6b674b68-5eef-4c55-817f-8dec4dc781fd\" (UID: \"6b674b68-5eef-4c55-817f-8dec4dc781fd\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.304892 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9a54508-7f70-4e5d-952a-587f8fabeb1c-etc-swift\") pod \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.305004 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-dispersionconf\") pod \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.305057 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzw9c\" 
(UniqueName: \"kubernetes.io/projected/6b674b68-5eef-4c55-817f-8dec4dc781fd-kube-api-access-gzw9c\") pod \"6b674b68-5eef-4c55-817f-8dec4dc781fd\" (UID: \"6b674b68-5eef-4c55-817f-8dec4dc781fd\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.305080 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-ring-data-devices\") pod \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.305116 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-scripts\") pod \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.305164 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl5k6\" (UniqueName: \"kubernetes.io/projected/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-kube-api-access-sl5k6\") pod \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\" (UID: \"9ed0c697-2ac4-4500-a90d-e09d6ba279ae\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.305221 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwjbl\" (UniqueName: \"kubernetes.io/projected/c9a54508-7f70-4e5d-952a-587f8fabeb1c-kube-api-access-bwjbl\") pod \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.305292 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-combined-ca-bundle\") pod \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\" (UID: \"c9a54508-7f70-4e5d-952a-587f8fabeb1c\") " Nov 24 
09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.306183 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ed0c697-2ac4-4500-a90d-e09d6ba279ae" (UID: "9ed0c697-2ac4-4500-a90d-e09d6ba279ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.307595 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9a54508-7f70-4e5d-952a-587f8fabeb1c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c9a54508-7f70-4e5d-952a-587f8fabeb1c" (UID: "c9a54508-7f70-4e5d-952a-587f8fabeb1c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.308759 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b674b68-5eef-4c55-817f-8dec4dc781fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b674b68-5eef-4c55-817f-8dec4dc781fd" (UID: "6b674b68-5eef-4c55-817f-8dec4dc781fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.309284 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c9a54508-7f70-4e5d-952a-587f8fabeb1c" (UID: "c9a54508-7f70-4e5d-952a-587f8fabeb1c"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.314619 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b674b68-5eef-4c55-817f-8dec4dc781fd-kube-api-access-gzw9c" (OuterVolumeSpecName: "kube-api-access-gzw9c") pod "6b674b68-5eef-4c55-817f-8dec4dc781fd" (UID: "6b674b68-5eef-4c55-817f-8dec4dc781fd"). InnerVolumeSpecName "kube-api-access-gzw9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.314791 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a54508-7f70-4e5d-952a-587f8fabeb1c-kube-api-access-bwjbl" (OuterVolumeSpecName: "kube-api-access-bwjbl") pod "c9a54508-7f70-4e5d-952a-587f8fabeb1c" (UID: "c9a54508-7f70-4e5d-952a-587f8fabeb1c"). InnerVolumeSpecName "kube-api-access-bwjbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.315546 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-kube-api-access-sl5k6" (OuterVolumeSpecName: "kube-api-access-sl5k6") pod "9ed0c697-2ac4-4500-a90d-e09d6ba279ae" (UID: "9ed0c697-2ac4-4500-a90d-e09d6ba279ae"). InnerVolumeSpecName "kube-api-access-sl5k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.333470 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-scripts" (OuterVolumeSpecName: "scripts") pod "c9a54508-7f70-4e5d-952a-587f8fabeb1c" (UID: "c9a54508-7f70-4e5d-952a-587f8fabeb1c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.337923 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c9a54508-7f70-4e5d-952a-587f8fabeb1c" (UID: "c9a54508-7f70-4e5d-952a-587f8fabeb1c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.341836 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9a54508-7f70-4e5d-952a-587f8fabeb1c" (UID: "c9a54508-7f70-4e5d-952a-587f8fabeb1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.359788 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c9a54508-7f70-4e5d-952a-587f8fabeb1c" (UID: "c9a54508-7f70-4e5d-952a-587f8fabeb1c"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407579 4886 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407631 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407648 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b674b68-5eef-4c55-817f-8dec4dc781fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407661 4886 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c9a54508-7f70-4e5d-952a-587f8fabeb1c-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407673 4886 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407686 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzw9c\" (UniqueName: \"kubernetes.io/projected/6b674b68-5eef-4c55-817f-8dec4dc781fd-kube-api-access-gzw9c\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407698 4886 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407716 4886 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a54508-7f70-4e5d-952a-587f8fabeb1c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407730 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl5k6\" (UniqueName: \"kubernetes.io/projected/9ed0c697-2ac4-4500-a90d-e09d6ba279ae-kube-api-access-sl5k6\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407745 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwjbl\" (UniqueName: \"kubernetes.io/projected/c9a54508-7f70-4e5d-952a-587f8fabeb1c-kube-api-access-bwjbl\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.407756 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a54508-7f70-4e5d-952a-587f8fabeb1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.632256 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e011-account-create-rzd97" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.632302 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e011-account-create-rzd97" event={"ID":"6b674b68-5eef-4c55-817f-8dec4dc781fd","Type":"ContainerDied","Data":"badff5857db8e63c75c8bb4baaa4ea031d99363f6a50d0b4b7d53d6f9b98040a"} Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.632395 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="badff5857db8e63c75c8bb4baaa4ea031d99363f6a50d0b4b7d53d6f9b98040a" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.634904 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9n6ds" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.634904 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9n6ds" event={"ID":"9ed0c697-2ac4-4500-a90d-e09d6ba279ae","Type":"ContainerDied","Data":"0770f22fb2521ee071718df387565645d0bfaee9345402f777e9e308ac3d8193"} Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.635054 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0770f22fb2521ee071718df387565645d0bfaee9345402f777e9e308ac3d8193" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.638463 4886 generic.go:334] "Generic (PLEG): container finished" podID="f10026aa-640c-4f36-9912-cd4177af074d" containerID="e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d" exitCode=0 Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.638565 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f10026aa-640c-4f36-9912-cd4177af074d","Type":"ContainerDied","Data":"e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d"} Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.641385 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dxnk5" event={"ID":"c9a54508-7f70-4e5d-952a-587f8fabeb1c","Type":"ContainerDied","Data":"004ed56e9b7ce9f38ac92a7c63148ca227fc429cf89e3c599ea71e8647692736"} Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.641437 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="004ed56e9b7ce9f38ac92a7c63148ca227fc429cf89e3c599ea71e8647692736" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.641407 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dxnk5" Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.649066 4886 generic.go:334] "Generic (PLEG): container finished" podID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerID="63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60" exitCode=0 Nov 24 09:06:59 crc kubenswrapper[4886]: I1124 09:06:59.649128 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"510b7a7a-1206-44f7-bd72-a85590e7a1ac","Type":"ContainerDied","Data":"63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60"} Nov 24 09:07:00 crc kubenswrapper[4886]: I1124 09:07:00.696909 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"510b7a7a-1206-44f7-bd72-a85590e7a1ac","Type":"ContainerStarted","Data":"69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745"} Nov 24 09:07:00 crc kubenswrapper[4886]: I1124 09:07:00.699065 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 09:07:00 crc kubenswrapper[4886]: I1124 09:07:00.700378 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f10026aa-640c-4f36-9912-cd4177af074d","Type":"ContainerStarted","Data":"3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7"} Nov 24 09:07:00 crc kubenswrapper[4886]: I1124 09:07:00.700841 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:07:00 crc kubenswrapper[4886]: I1124 09:07:00.732038 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.166831462 podStartE2EDuration="59.732017013s" podCreationTimestamp="2025-11-24 09:06:01 +0000 UTC" firstStartedPulling="2025-11-24 09:06:14.522369537 +0000 UTC m=+1030.409107672" lastFinishedPulling="2025-11-24 09:06:23.087555088 
+0000 UTC m=+1038.974293223" observedRunningTime="2025-11-24 09:07:00.723733336 +0000 UTC m=+1076.610471481" watchObservedRunningTime="2025-11-24 09:07:00.732017013 +0000 UTC m=+1076.618755158" Nov 24 09:07:00 crc kubenswrapper[4886]: I1124 09:07:00.769455 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.312204686 podStartE2EDuration="59.769423053s" podCreationTimestamp="2025-11-24 09:06:01 +0000 UTC" firstStartedPulling="2025-11-24 09:06:15.527352165 +0000 UTC m=+1031.414090300" lastFinishedPulling="2025-11-24 09:06:22.984570532 +0000 UTC m=+1038.871308667" observedRunningTime="2025-11-24 09:07:00.761598879 +0000 UTC m=+1076.648337024" watchObservedRunningTime="2025-11-24 09:07:00.769423053 +0000 UTC m=+1076.656161208" Nov 24 09:07:01 crc kubenswrapper[4886]: I1124 09:07:01.407110 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rzmth" podUID="b7951685-e0e7-4524-ba49-b720357aa59c" containerName="ovn-controller" probeResult="failure" output=< Nov 24 09:07:01 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 09:07:01 crc kubenswrapper[4886]: > Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.513561 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rzmth" podUID="b7951685-e0e7-4524-ba49-b720357aa59c" containerName="ovn-controller" probeResult="failure" output=< Nov 24 09:07:06 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 09:07:06 crc kubenswrapper[4886]: > Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.519613 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.536737 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ovn-controller-ovs-vclvw" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.809810 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rzmth-config-gbppv"] Nov 24 09:07:06 crc kubenswrapper[4886]: E1124 09:07:06.810301 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed0c697-2ac4-4500-a90d-e09d6ba279ae" containerName="mariadb-database-create" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.810320 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed0c697-2ac4-4500-a90d-e09d6ba279ae" containerName="mariadb-database-create" Nov 24 09:07:06 crc kubenswrapper[4886]: E1124 09:07:06.810362 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b674b68-5eef-4c55-817f-8dec4dc781fd" containerName="mariadb-account-create" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.810368 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b674b68-5eef-4c55-817f-8dec4dc781fd" containerName="mariadb-account-create" Nov 24 09:07:06 crc kubenswrapper[4886]: E1124 09:07:06.810383 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a54508-7f70-4e5d-952a-587f8fabeb1c" containerName="swift-ring-rebalance" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.810390 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a54508-7f70-4e5d-952a-587f8fabeb1c" containerName="swift-ring-rebalance" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.810549 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b674b68-5eef-4c55-817f-8dec4dc781fd" containerName="mariadb-account-create" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.810569 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed0c697-2ac4-4500-a90d-e09d6ba279ae" containerName="mariadb-database-create" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.810581 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c9a54508-7f70-4e5d-952a-587f8fabeb1c" containerName="swift-ring-rebalance" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.811336 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.816249 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.826019 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rzmth-config-gbppv"] Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.986019 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.986099 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run-ovn\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.986251 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-log-ovn\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.987242 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-scripts\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.987321 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-additional-scripts\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:06 crc kubenswrapper[4886]: I1124 09:07:06.987401 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8bb\" (UniqueName: \"kubernetes.io/projected/9e2af7a5-08f0-4784-baec-f47dd090ac37-kube-api-access-7f8bb\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.089739 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run-ovn\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.089845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-log-ovn\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.089909 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-scripts\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.089957 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-additional-scripts\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.090022 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8bb\" (UniqueName: \"kubernetes.io/projected/9e2af7a5-08f0-4784-baec-f47dd090ac37-kube-api-access-7f8bb\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.090168 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.090564 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.090625 4886 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-log-ovn\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.091078 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-additional-scripts\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.091219 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run-ovn\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.092782 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-scripts\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.124219 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8bb\" (UniqueName: \"kubernetes.io/projected/9e2af7a5-08f0-4784-baec-f47dd090ac37-kube-api-access-7f8bb\") pod \"ovn-controller-rzmth-config-gbppv\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:07 crc kubenswrapper[4886]: I1124 09:07:07.163410 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:11 crc kubenswrapper[4886]: I1124 09:07:11.383581 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rzmth" podUID="b7951685-e0e7-4524-ba49-b720357aa59c" containerName="ovn-controller" probeResult="failure" output=< Nov 24 09:07:11 crc kubenswrapper[4886]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 09:07:11 crc kubenswrapper[4886]: > Nov 24 09:07:11 crc kubenswrapper[4886]: I1124 09:07:11.801039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:07:11 crc kubenswrapper[4886]: I1124 09:07:11.807921 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65b7f4e6-3f5e-419b-9761-c0fc78a4632d-etc-swift\") pod \"swift-storage-0\" (UID: \"65b7f4e6-3f5e-419b-9761-c0fc78a4632d\") " pod="openstack/swift-storage-0" Nov 24 09:07:11 crc kubenswrapper[4886]: I1124 09:07:11.866871 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 24 09:07:12 crc kubenswrapper[4886]: I1124 09:07:12.797945 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.191493 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.225721 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ll5c2"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.227008 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.266324 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2c4c-account-create-qcwml"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.268675 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.271557 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.303219 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ll5c2"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.334901 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe94e3da-9230-46bd-9139-1ec416d11108-operator-scripts\") pod \"barbican-db-create-ll5c2\" (UID: \"fe94e3da-9230-46bd-9139-1ec416d11108\") " pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.335030 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jckzb\" (UniqueName: \"kubernetes.io/projected/fe94e3da-9230-46bd-9139-1ec416d11108-kube-api-access-jckzb\") pod \"barbican-db-create-ll5c2\" (UID: \"fe94e3da-9230-46bd-9139-1ec416d11108\") " pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.337005 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2c4c-account-create-qcwml"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.376249 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7fqnd"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.377856 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.392772 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7fqnd"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.441674 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63eb5a8c-8a76-421f-9a44-a63c7ab43077-operator-scripts\") pod \"barbican-2c4c-account-create-qcwml\" (UID: \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\") " pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.441720 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe94e3da-9230-46bd-9139-1ec416d11108-operator-scripts\") pod \"barbican-db-create-ll5c2\" (UID: \"fe94e3da-9230-46bd-9139-1ec416d11108\") " pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.441786 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jckzb\" (UniqueName: \"kubernetes.io/projected/fe94e3da-9230-46bd-9139-1ec416d11108-kube-api-access-jckzb\") pod \"barbican-db-create-ll5c2\" (UID: \"fe94e3da-9230-46bd-9139-1ec416d11108\") " pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.441862 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blnff\" (UniqueName: \"kubernetes.io/projected/63eb5a8c-8a76-421f-9a44-a63c7ab43077-kube-api-access-blnff\") pod \"barbican-2c4c-account-create-qcwml\" (UID: \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\") " pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.442679 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe94e3da-9230-46bd-9139-1ec416d11108-operator-scripts\") pod \"barbican-db-create-ll5c2\" (UID: \"fe94e3da-9230-46bd-9139-1ec416d11108\") " pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.494730 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jckzb\" (UniqueName: \"kubernetes.io/projected/fe94e3da-9230-46bd-9139-1ec416d11108-kube-api-access-jckzb\") pod \"barbican-db-create-ll5c2\" (UID: \"fe94e3da-9230-46bd-9139-1ec416d11108\") " pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.522441 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8e9a-account-create-b55cn"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.523791 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.530877 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.545417 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63eb5a8c-8a76-421f-9a44-a63c7ab43077-operator-scripts\") pod \"barbican-2c4c-account-create-qcwml\" (UID: \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\") " pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.545513 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5xxk\" (UniqueName: \"kubernetes.io/projected/22909ed9-c35f-4768-ab83-9f8a3442718b-kube-api-access-t5xxk\") pod \"cinder-db-create-7fqnd\" (UID: \"22909ed9-c35f-4768-ab83-9f8a3442718b\") " pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:13 crc kubenswrapper[4886]: 
I1124 09:07:13.545557 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22909ed9-c35f-4768-ab83-9f8a3442718b-operator-scripts\") pod \"cinder-db-create-7fqnd\" (UID: \"22909ed9-c35f-4768-ab83-9f8a3442718b\") " pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.545652 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blnff\" (UniqueName: \"kubernetes.io/projected/63eb5a8c-8a76-421f-9a44-a63c7ab43077-kube-api-access-blnff\") pod \"barbican-2c4c-account-create-qcwml\" (UID: \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\") " pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.546692 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63eb5a8c-8a76-421f-9a44-a63c7ab43077-operator-scripts\") pod \"barbican-2c4c-account-create-qcwml\" (UID: \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\") " pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.550046 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.556817 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8e9a-account-create-b55cn"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.599080 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blnff\" (UniqueName: \"kubernetes.io/projected/63eb5a8c-8a76-421f-9a44-a63c7ab43077-kube-api-access-blnff\") pod \"barbican-2c4c-account-create-qcwml\" (UID: \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\") " pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.646225 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-t7vln"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.647835 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7659c776-9218-442b-b813-ebff19a5e5ee-operator-scripts\") pod \"cinder-8e9a-account-create-b55cn\" (UID: \"7659c776-9218-442b-b813-ebff19a5e5ee\") " pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.647929 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5xxk\" (UniqueName: \"kubernetes.io/projected/22909ed9-c35f-4768-ab83-9f8a3442718b-kube-api-access-t5xxk\") pod \"cinder-db-create-7fqnd\" (UID: \"22909ed9-c35f-4768-ab83-9f8a3442718b\") " pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.647966 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22909ed9-c35f-4768-ab83-9f8a3442718b-operator-scripts\") pod \"cinder-db-create-7fqnd\" (UID: \"22909ed9-c35f-4768-ab83-9f8a3442718b\") " pod="openstack/cinder-db-create-7fqnd" Nov 24 
09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.648005 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjf27\" (UniqueName: \"kubernetes.io/projected/7659c776-9218-442b-b813-ebff19a5e5ee-kube-api-access-hjf27\") pod \"cinder-8e9a-account-create-b55cn\" (UID: \"7659c776-9218-442b-b813-ebff19a5e5ee\") " pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.648056 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.649204 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22909ed9-c35f-4768-ab83-9f8a3442718b-operator-scripts\") pod \"cinder-db-create-7fqnd\" (UID: \"22909ed9-c35f-4768-ab83-9f8a3442718b\") " pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.652489 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.652585 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.652671 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.655823 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t7vln"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.673108 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tfhvb" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.675101 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.702370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5xxk\" (UniqueName: \"kubernetes.io/projected/22909ed9-c35f-4768-ab83-9f8a3442718b-kube-api-access-t5xxk\") pod \"cinder-db-create-7fqnd\" (UID: \"22909ed9-c35f-4768-ab83-9f8a3442718b\") " pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.743308 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5xk4v"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.745415 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.749396 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-config-data\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.749458 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td9bp\" (UniqueName: \"kubernetes.io/projected/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-kube-api-access-td9bp\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.749498 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-combined-ca-bundle\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 
09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.749535 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjf27\" (UniqueName: \"kubernetes.io/projected/7659c776-9218-442b-b813-ebff19a5e5ee-kube-api-access-hjf27\") pod \"cinder-8e9a-account-create-b55cn\" (UID: \"7659c776-9218-442b-b813-ebff19a5e5ee\") " pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.749599 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7659c776-9218-442b-b813-ebff19a5e5ee-operator-scripts\") pod \"cinder-8e9a-account-create-b55cn\" (UID: \"7659c776-9218-442b-b813-ebff19a5e5ee\") " pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.750402 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7659c776-9218-442b-b813-ebff19a5e5ee-operator-scripts\") pod \"cinder-8e9a-account-create-b55cn\" (UID: \"7659c776-9218-442b-b813-ebff19a5e5ee\") " pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.779878 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjf27\" (UniqueName: \"kubernetes.io/projected/7659c776-9218-442b-b813-ebff19a5e5ee-kube-api-access-hjf27\") pod \"cinder-8e9a-account-create-b55cn\" (UID: \"7659c776-9218-442b-b813-ebff19a5e5ee\") " pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.804081 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5xk4v"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.851668 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9bp\" (UniqueName: 
\"kubernetes.io/projected/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-kube-api-access-td9bp\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.851737 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-combined-ca-bundle\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.851858 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-operator-scripts\") pod \"neutron-db-create-5xk4v\" (UID: \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\") " pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.851893 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpw5s\" (UniqueName: \"kubernetes.io/projected/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-kube-api-access-xpw5s\") pod \"neutron-db-create-5xk4v\" (UID: \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\") " pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.851921 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-config-data\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.863552 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.864207 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-config-data\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.872131 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.884120 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-combined-ca-bundle\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.894519 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td9bp\" (UniqueName: \"kubernetes.io/projected/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-kube-api-access-td9bp\") pod \"keystone-db-sync-t7vln\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.903722 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-387e-account-create-hz44q"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.921567 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.931743 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.941320 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-387e-account-create-hz44q"] Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.954258 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-operator-scripts\") pod \"neutron-db-create-5xk4v\" (UID: \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\") " pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.954329 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpw5s\" (UniqueName: \"kubernetes.io/projected/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-kube-api-access-xpw5s\") pod \"neutron-db-create-5xk4v\" (UID: \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\") " pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:13 crc kubenswrapper[4886]: I1124 09:07:13.964742 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-operator-scripts\") pod \"neutron-db-create-5xk4v\" (UID: \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\") " pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.001136 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpw5s\" (UniqueName: \"kubernetes.io/projected/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-kube-api-access-xpw5s\") pod \"neutron-db-create-5xk4v\" (UID: \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\") " pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 
09:07:14.039480 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.059511 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa05ad9f-553a-4565-b074-6cae6220d5d1-operator-scripts\") pod \"neutron-387e-account-create-hz44q\" (UID: \"fa05ad9f-553a-4565-b074-6cae6220d5d1\") " pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.059579 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6qf\" (UniqueName: \"kubernetes.io/projected/fa05ad9f-553a-4565-b074-6cae6220d5d1-kube-api-access-zf6qf\") pod \"neutron-387e-account-create-hz44q\" (UID: \"fa05ad9f-553a-4565-b074-6cae6220d5d1\") " pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.090227 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.160845 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa05ad9f-553a-4565-b074-6cae6220d5d1-operator-scripts\") pod \"neutron-387e-account-create-hz44q\" (UID: \"fa05ad9f-553a-4565-b074-6cae6220d5d1\") " pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.160902 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf6qf\" (UniqueName: \"kubernetes.io/projected/fa05ad9f-553a-4565-b074-6cae6220d5d1-kube-api-access-zf6qf\") pod \"neutron-387e-account-create-hz44q\" (UID: \"fa05ad9f-553a-4565-b074-6cae6220d5d1\") " pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.161942 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa05ad9f-553a-4565-b074-6cae6220d5d1-operator-scripts\") pod \"neutron-387e-account-create-hz44q\" (UID: \"fa05ad9f-553a-4565-b074-6cae6220d5d1\") " pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.192383 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf6qf\" (UniqueName: \"kubernetes.io/projected/fa05ad9f-553a-4565-b074-6cae6220d5d1-kube-api-access-zf6qf\") pod \"neutron-387e-account-create-hz44q\" (UID: \"fa05ad9f-553a-4565-b074-6cae6220d5d1\") " pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.294672 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.469591 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rzmth-config-gbppv"] Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.563782 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ll5c2"] Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.636863 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.707574 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7fqnd"] Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.750273 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2c4c-account-create-qcwml"] Nov 24 09:07:14 crc kubenswrapper[4886]: W1124 09:07:14.780580 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22909ed9_c35f_4768_ab83_9f8a3442718b.slice/crio-91c2627f58a629b157df035487ff8744fca6e25e5f971ffa6bc74cc642f83541 WatchSource:0}: Error finding container 91c2627f58a629b157df035487ff8744fca6e25e5f971ffa6bc74cc642f83541: Status 404 returned error can't find the container with id 91c2627f58a629b157df035487ff8744fca6e25e5f971ffa6bc74cc642f83541 Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.924876 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ll5c2" event={"ID":"fe94e3da-9230-46bd-9139-1ec416d11108","Type":"ContainerStarted","Data":"18e35a5033cd96cf724834d173755ee51b97f27edb107bf91ac89a94f2258793"} Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.948661 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"96efbe41c3d34bde3e264d2856dab48b87a553f45194de272400a58d9c5b67a2"} Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.960244 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rzmth-config-gbppv" event={"ID":"9e2af7a5-08f0-4784-baec-f47dd090ac37","Type":"ContainerStarted","Data":"06e9ce1256238b17286ccc2a76b7ddeeaa4b2dca038d71e9db92cc4b56f8f138"} Nov 24 09:07:14 crc kubenswrapper[4886]: I1124 09:07:14.967542 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7fqnd" event={"ID":"22909ed9-c35f-4768-ab83-9f8a3442718b","Type":"ContainerStarted","Data":"91c2627f58a629b157df035487ff8744fca6e25e5f971ffa6bc74cc642f83541"} Nov 24 09:07:15 crc kubenswrapper[4886]: I1124 09:07:15.012264 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2c4c-account-create-qcwml" event={"ID":"63eb5a8c-8a76-421f-9a44-a63c7ab43077","Type":"ContainerStarted","Data":"ec1e13adb55d7d32a9861e0ac5fbc56a73fc8d362c4729bab05dd2e8a45f07d3"} Nov 24 09:07:15 crc kubenswrapper[4886]: W1124 09:07:15.022491 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod606b0ae4_0857_44f3_a72a_aa8cfa5416ef.slice/crio-7f8949908f56bbd8b1222aa63e88d37caf33e13ebd6d7b9e1d476736e5fe801d WatchSource:0}: Error finding container 7f8949908f56bbd8b1222aa63e88d37caf33e13ebd6d7b9e1d476736e5fe801d: Status 404 returned error can't find the container with id 7f8949908f56bbd8b1222aa63e88d37caf33e13ebd6d7b9e1d476736e5fe801d Nov 24 09:07:15 crc kubenswrapper[4886]: I1124 09:07:15.061356 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8e9a-account-create-b55cn"] Nov 24 09:07:15 crc kubenswrapper[4886]: I1124 09:07:15.072959 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t7vln"] Nov 24 09:07:15 crc 
kubenswrapper[4886]: I1124 09:07:15.173632 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-387e-account-create-hz44q"] Nov 24 09:07:15 crc kubenswrapper[4886]: W1124 09:07:15.186761 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa05ad9f_553a_4565_b074_6cae6220d5d1.slice/crio-65e3cb1026bc85ab0b2a2d9a74b824ec627c36fd38f106b4a3883642af8fdda0 WatchSource:0}: Error finding container 65e3cb1026bc85ab0b2a2d9a74b824ec627c36fd38f106b4a3883642af8fdda0: Status 404 returned error can't find the container with id 65e3cb1026bc85ab0b2a2d9a74b824ec627c36fd38f106b4a3883642af8fdda0 Nov 24 09:07:15 crc kubenswrapper[4886]: I1124 09:07:15.199223 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5xk4v"] Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.038939 4886 generic.go:334] "Generic (PLEG): container finished" podID="63eb5a8c-8a76-421f-9a44-a63c7ab43077" containerID="ee2c3e316b0e6a192598ed3200f7cf25cb688664e2dd7f2c96ff86b9759f9a8c" exitCode=0 Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.039013 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2c4c-account-create-qcwml" event={"ID":"63eb5a8c-8a76-421f-9a44-a63c7ab43077","Type":"ContainerDied","Data":"ee2c3e316b0e6a192598ed3200f7cf25cb688664e2dd7f2c96ff86b9759f9a8c"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.042175 4886 generic.go:334] "Generic (PLEG): container finished" podID="fe94e3da-9230-46bd-9139-1ec416d11108" containerID="3ba76402272655a4a57caed0d6d49a8a00ea1c56aca763d213346df679685c22" exitCode=0 Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.042252 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ll5c2" event={"ID":"fe94e3da-9230-46bd-9139-1ec416d11108","Type":"ContainerDied","Data":"3ba76402272655a4a57caed0d6d49a8a00ea1c56aca763d213346df679685c22"} Nov 24 
09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.044263 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t7vln" event={"ID":"606b0ae4-0857-44f3-a72a-aa8cfa5416ef","Type":"ContainerStarted","Data":"7f8949908f56bbd8b1222aa63e88d37caf33e13ebd6d7b9e1d476736e5fe801d"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.046089 4886 generic.go:334] "Generic (PLEG): container finished" podID="22909ed9-c35f-4768-ab83-9f8a3442718b" containerID="9a4b068b4efa9e961ba6485d83cd258db877c7b2ebd1a6c688d9d3508638d9f4" exitCode=0 Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.046074 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7fqnd" event={"ID":"22909ed9-c35f-4768-ab83-9f8a3442718b","Type":"ContainerDied","Data":"9a4b068b4efa9e961ba6485d83cd258db877c7b2ebd1a6c688d9d3508638d9f4"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.050074 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4sj8x" event={"ID":"f45428c8-b123-4e3e-9ba0-5ab11cf317a5","Type":"ContainerStarted","Data":"4bcf034099c8cf9144e31f390a627282fb58c0b2bce78275df11349602751575"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.053011 4886 generic.go:334] "Generic (PLEG): container finished" podID="7659c776-9218-442b-b813-ebff19a5e5ee" containerID="b1e92747402cd8daf4648b849b492976ddbfd8a3f0b0310c191abc6e545d4597" exitCode=0 Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.053122 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8e9a-account-create-b55cn" event={"ID":"7659c776-9218-442b-b813-ebff19a5e5ee","Type":"ContainerDied","Data":"b1e92747402cd8daf4648b849b492976ddbfd8a3f0b0310c191abc6e545d4597"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.053196 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8e9a-account-create-b55cn" 
event={"ID":"7659c776-9218-442b-b813-ebff19a5e5ee","Type":"ContainerStarted","Data":"5c83b6e9d3c8dc0eced8ff7dfa8cb5096d0179432167b75f910856a789ccd3da"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.066933 4886 generic.go:334] "Generic (PLEG): container finished" podID="fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131" containerID="f0691e020efc30c39acfccae054077c4373d9bf81d4c01d5c3966c750533db67" exitCode=0 Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.067061 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5xk4v" event={"ID":"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131","Type":"ContainerDied","Data":"f0691e020efc30c39acfccae054077c4373d9bf81d4c01d5c3966c750533db67"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.067238 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5xk4v" event={"ID":"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131","Type":"ContainerStarted","Data":"0d0236941f0a3785f04efed98f490d6974664dd1cc4d40a2710be76fc5a50982"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.077577 4886 generic.go:334] "Generic (PLEG): container finished" podID="fa05ad9f-553a-4565-b074-6cae6220d5d1" containerID="3474680a0d4d0f9b57bf626a496826fba8e355aa9b52bd3d677a201467a8b0a2" exitCode=0 Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.077846 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-387e-account-create-hz44q" event={"ID":"fa05ad9f-553a-4565-b074-6cae6220d5d1","Type":"ContainerDied","Data":"3474680a0d4d0f9b57bf626a496826fba8e355aa9b52bd3d677a201467a8b0a2"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.077883 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-387e-account-create-hz44q" event={"ID":"fa05ad9f-553a-4565-b074-6cae6220d5d1","Type":"ContainerStarted","Data":"65e3cb1026bc85ab0b2a2d9a74b824ec627c36fd38f106b4a3883642af8fdda0"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.095686 4886 
generic.go:334] "Generic (PLEG): container finished" podID="9e2af7a5-08f0-4784-baec-f47dd090ac37" containerID="d68a5d6237822d87f912b831c39028de71f2381796c7c8d731910a94191d89ef" exitCode=0 Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.095782 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rzmth-config-gbppv" event={"ID":"9e2af7a5-08f0-4784-baec-f47dd090ac37","Type":"ContainerDied","Data":"d68a5d6237822d87f912b831c39028de71f2381796c7c8d731910a94191d89ef"} Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.117911 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4sj8x" podStartSLOduration=4.286058126 podStartE2EDuration="20.117884293s" podCreationTimestamp="2025-11-24 09:06:56 +0000 UTC" firstStartedPulling="2025-11-24 09:06:57.631861962 +0000 UTC m=+1073.518600097" lastFinishedPulling="2025-11-24 09:07:13.463688129 +0000 UTC m=+1089.350426264" observedRunningTime="2025-11-24 09:07:16.11254501 +0000 UTC m=+1091.999283145" watchObservedRunningTime="2025-11-24 09:07:16.117884293 +0000 UTC m=+1092.004622418" Nov 24 09:07:16 crc kubenswrapper[4886]: I1124 09:07:16.420955 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rzmth" Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.157770 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"fc93dc258956d237c743ec79541f303dde7c43f1eb690acc2364f032a8c5810b"} Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.157862 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"a58d6965e20f6fa2d36f599cb705918b6ec1ba81835bf6fca8abf59466449d3d"} Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.736533 4886 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.895893 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7659c776-9218-442b-b813-ebff19a5e5ee-operator-scripts\") pod \"7659c776-9218-442b-b813-ebff19a5e5ee\" (UID: \"7659c776-9218-442b-b813-ebff19a5e5ee\") " Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.896520 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjf27\" (UniqueName: \"kubernetes.io/projected/7659c776-9218-442b-b813-ebff19a5e5ee-kube-api-access-hjf27\") pod \"7659c776-9218-442b-b813-ebff19a5e5ee\" (UID: \"7659c776-9218-442b-b813-ebff19a5e5ee\") " Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.897336 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7659c776-9218-442b-b813-ebff19a5e5ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7659c776-9218-442b-b813-ebff19a5e5ee" (UID: "7659c776-9218-442b-b813-ebff19a5e5ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.922077 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7659c776-9218-442b-b813-ebff19a5e5ee-kube-api-access-hjf27" (OuterVolumeSpecName: "kube-api-access-hjf27") pod "7659c776-9218-442b-b813-ebff19a5e5ee" (UID: "7659c776-9218-442b-b813-ebff19a5e5ee"). InnerVolumeSpecName "kube-api-access-hjf27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.999181 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7659c776-9218-442b-b813-ebff19a5e5ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:17 crc kubenswrapper[4886]: I1124 09:07:17.999215 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjf27\" (UniqueName: \"kubernetes.io/projected/7659c776-9218-442b-b813-ebff19a5e5ee-kube-api-access-hjf27\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.048958 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.052409 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.067972 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.076649 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.090221 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.101205 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.179013 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5xk4v" event={"ID":"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131","Type":"ContainerDied","Data":"0d0236941f0a3785f04efed98f490d6974664dd1cc4d40a2710be76fc5a50982"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.179088 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d0236941f0a3785f04efed98f490d6974664dd1cc4d40a2710be76fc5a50982" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.179099 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5xk4v" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.183357 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"1cafdb2198b0440219524feee9c8cf0c289abaebf651a057fee05ff01e9f884d"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.183418 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"4ce92738c46e4ac7e193055a6040ff3b0384fec08775e888ffed5d740c73a5c5"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.186695 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-387e-account-create-hz44q" event={"ID":"fa05ad9f-553a-4565-b074-6cae6220d5d1","Type":"ContainerDied","Data":"65e3cb1026bc85ab0b2a2d9a74b824ec627c36fd38f106b4a3883642af8fdda0"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.186716 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-387e-account-create-hz44q" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.186733 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e3cb1026bc85ab0b2a2d9a74b824ec627c36fd38f106b4a3883642af8fdda0" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.193496 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rzmth-config-gbppv" event={"ID":"9e2af7a5-08f0-4784-baec-f47dd090ac37","Type":"ContainerDied","Data":"06e9ce1256238b17286ccc2a76b7ddeeaa4b2dca038d71e9db92cc4b56f8f138"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.193537 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e9ce1256238b17286ccc2a76b7ddeeaa4b2dca038d71e9db92cc4b56f8f138" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.193509 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rzmth-config-gbppv" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.197877 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7fqnd" event={"ID":"22909ed9-c35f-4768-ab83-9f8a3442718b","Type":"ContainerDied","Data":"91c2627f58a629b157df035487ff8744fca6e25e5f971ffa6bc74cc642f83541"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.197975 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91c2627f58a629b157df035487ff8744fca6e25e5f971ffa6bc74cc642f83541" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.197906 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7fqnd" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.200227 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2c4c-account-create-qcwml" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.200229 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2c4c-account-create-qcwml" event={"ID":"63eb5a8c-8a76-421f-9a44-a63c7ab43077","Type":"ContainerDied","Data":"ec1e13adb55d7d32a9861e0ac5fbc56a73fc8d362c4729bab05dd2e8a45f07d3"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.200338 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec1e13adb55d7d32a9861e0ac5fbc56a73fc8d362c4729bab05dd2e8a45f07d3" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.202704 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ll5c2" event={"ID":"fe94e3da-9230-46bd-9139-1ec416d11108","Type":"ContainerDied","Data":"18e35a5033cd96cf724834d173755ee51b97f27edb107bf91ac89a94f2258793"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.202735 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e35a5033cd96cf724834d173755ee51b97f27edb107bf91ac89a94f2258793" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.202862 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ll5c2" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.207496 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jckzb\" (UniqueName: \"kubernetes.io/projected/fe94e3da-9230-46bd-9139-1ec416d11108-kube-api-access-jckzb\") pod \"fe94e3da-9230-46bd-9139-1ec416d11108\" (UID: \"fe94e3da-9230-46bd-9139-1ec416d11108\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.207565 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run\") pod \"9e2af7a5-08f0-4784-baec-f47dd090ac37\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.207600 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf6qf\" (UniqueName: \"kubernetes.io/projected/fa05ad9f-553a-4565-b074-6cae6220d5d1-kube-api-access-zf6qf\") pod \"fa05ad9f-553a-4565-b074-6cae6220d5d1\" (UID: \"fa05ad9f-553a-4565-b074-6cae6220d5d1\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.207684 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5xxk\" (UniqueName: \"kubernetes.io/projected/22909ed9-c35f-4768-ab83-9f8a3442718b-kube-api-access-t5xxk\") pod \"22909ed9-c35f-4768-ab83-9f8a3442718b\" (UID: \"22909ed9-c35f-4768-ab83-9f8a3442718b\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.207723 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22909ed9-c35f-4768-ab83-9f8a3442718b-operator-scripts\") pod \"22909ed9-c35f-4768-ab83-9f8a3442718b\" (UID: \"22909ed9-c35f-4768-ab83-9f8a3442718b\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.207740 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run" (OuterVolumeSpecName: "var-run") pod "9e2af7a5-08f0-4784-baec-f47dd090ac37" (UID: "9e2af7a5-08f0-4784-baec-f47dd090ac37"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.207867 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe94e3da-9230-46bd-9139-1ec416d11108-operator-scripts\") pod \"fe94e3da-9230-46bd-9139-1ec416d11108\" (UID: \"fe94e3da-9230-46bd-9139-1ec416d11108\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.207999 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-log-ovn\") pod \"9e2af7a5-08f0-4784-baec-f47dd090ac37\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208035 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpw5s\" (UniqueName: \"kubernetes.io/projected/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-kube-api-access-xpw5s\") pod \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\" (UID: \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208093 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa05ad9f-553a-4565-b074-6cae6220d5d1-operator-scripts\") pod \"fa05ad9f-553a-4565-b074-6cae6220d5d1\" (UID: \"fa05ad9f-553a-4565-b074-6cae6220d5d1\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208199 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-scripts\") pod 
\"9e2af7a5-08f0-4784-baec-f47dd090ac37\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208338 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63eb5a8c-8a76-421f-9a44-a63c7ab43077-operator-scripts\") pod \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\" (UID: \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208378 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22909ed9-c35f-4768-ab83-9f8a3442718b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22909ed9-c35f-4768-ab83-9f8a3442718b" (UID: "22909ed9-c35f-4768-ab83-9f8a3442718b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208443 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f8bb\" (UniqueName: \"kubernetes.io/projected/9e2af7a5-08f0-4784-baec-f47dd090ac37-kube-api-access-7f8bb\") pod \"9e2af7a5-08f0-4784-baec-f47dd090ac37\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208491 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run-ovn\") pod \"9e2af7a5-08f0-4784-baec-f47dd090ac37\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208537 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-operator-scripts\") pod \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\" (UID: \"fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131\") " Nov 24 09:07:18 crc 
kubenswrapper[4886]: I1124 09:07:18.208576 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-additional-scripts\") pod \"9e2af7a5-08f0-4784-baec-f47dd090ac37\" (UID: \"9e2af7a5-08f0-4784-baec-f47dd090ac37\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208602 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blnff\" (UniqueName: \"kubernetes.io/projected/63eb5a8c-8a76-421f-9a44-a63c7ab43077-kube-api-access-blnff\") pod \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\" (UID: \"63eb5a8c-8a76-421f-9a44-a63c7ab43077\") " Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208676 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8e9a-account-create-b55cn" event={"ID":"7659c776-9218-442b-b813-ebff19a5e5ee","Type":"ContainerDied","Data":"5c83b6e9d3c8dc0eced8ff7dfa8cb5096d0179432167b75f910856a789ccd3da"} Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208722 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c83b6e9d3c8dc0eced8ff7dfa8cb5096d0179432167b75f910856a789ccd3da" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.208795 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8e9a-account-create-b55cn" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.209163 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9e2af7a5-08f0-4784-baec-f47dd090ac37" (UID: "9e2af7a5-08f0-4784-baec-f47dd090ac37"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.209319 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9e2af7a5-08f0-4784-baec-f47dd090ac37" (UID: "9e2af7a5-08f0-4784-baec-f47dd090ac37"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.209658 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa05ad9f-553a-4565-b074-6cae6220d5d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa05ad9f-553a-4565-b074-6cae6220d5d1" (UID: "fa05ad9f-553a-4565-b074-6cae6220d5d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.209822 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63eb5a8c-8a76-421f-9a44-a63c7ab43077-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63eb5a8c-8a76-421f-9a44-a63c7ab43077" (UID: "63eb5a8c-8a76-421f-9a44-a63c7ab43077"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.210036 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131" (UID: "fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.210379 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe94e3da-9230-46bd-9139-1ec416d11108-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe94e3da-9230-46bd-9139-1ec416d11108" (UID: "fe94e3da-9230-46bd-9139-1ec416d11108"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.210443 4886 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.210462 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22909ed9-c35f-4768-ab83-9f8a3442718b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.210781 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9e2af7a5-08f0-4784-baec-f47dd090ac37" (UID: "9e2af7a5-08f0-4784-baec-f47dd090ac37"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.210880 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-scripts" (OuterVolumeSpecName: "scripts") pod "9e2af7a5-08f0-4784-baec-f47dd090ac37" (UID: "9e2af7a5-08f0-4784-baec-f47dd090ac37"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.212332 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-kube-api-access-xpw5s" (OuterVolumeSpecName: "kube-api-access-xpw5s") pod "fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131" (UID: "fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131"). InnerVolumeSpecName "kube-api-access-xpw5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.213534 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22909ed9-c35f-4768-ab83-9f8a3442718b-kube-api-access-t5xxk" (OuterVolumeSpecName: "kube-api-access-t5xxk") pod "22909ed9-c35f-4768-ab83-9f8a3442718b" (UID: "22909ed9-c35f-4768-ab83-9f8a3442718b"). InnerVolumeSpecName "kube-api-access-t5xxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.214709 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe94e3da-9230-46bd-9139-1ec416d11108-kube-api-access-jckzb" (OuterVolumeSpecName: "kube-api-access-jckzb") pod "fe94e3da-9230-46bd-9139-1ec416d11108" (UID: "fe94e3da-9230-46bd-9139-1ec416d11108"). InnerVolumeSpecName "kube-api-access-jckzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.215655 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63eb5a8c-8a76-421f-9a44-a63c7ab43077-kube-api-access-blnff" (OuterVolumeSpecName: "kube-api-access-blnff") pod "63eb5a8c-8a76-421f-9a44-a63c7ab43077" (UID: "63eb5a8c-8a76-421f-9a44-a63c7ab43077"). InnerVolumeSpecName "kube-api-access-blnff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.215987 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2af7a5-08f0-4784-baec-f47dd090ac37-kube-api-access-7f8bb" (OuterVolumeSpecName: "kube-api-access-7f8bb") pod "9e2af7a5-08f0-4784-baec-f47dd090ac37" (UID: "9e2af7a5-08f0-4784-baec-f47dd090ac37"). InnerVolumeSpecName "kube-api-access-7f8bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.219747 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa05ad9f-553a-4565-b074-6cae6220d5d1-kube-api-access-zf6qf" (OuterVolumeSpecName: "kube-api-access-zf6qf") pod "fa05ad9f-553a-4565-b074-6cae6220d5d1" (UID: "fa05ad9f-553a-4565-b074-6cae6220d5d1"). InnerVolumeSpecName "kube-api-access-zf6qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.312893 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jckzb\" (UniqueName: \"kubernetes.io/projected/fe94e3da-9230-46bd-9139-1ec416d11108-kube-api-access-jckzb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.312950 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf6qf\" (UniqueName: \"kubernetes.io/projected/fa05ad9f-553a-4565-b074-6cae6220d5d1-kube-api-access-zf6qf\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.312964 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5xxk\" (UniqueName: \"kubernetes.io/projected/22909ed9-c35f-4768-ab83-9f8a3442718b-kube-api-access-t5xxk\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.312977 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe94e3da-9230-46bd-9139-1ec416d11108-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.312991 4886 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313004 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpw5s\" (UniqueName: \"kubernetes.io/projected/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-kube-api-access-xpw5s\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313017 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa05ad9f-553a-4565-b074-6cae6220d5d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313029 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313042 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63eb5a8c-8a76-421f-9a44-a63c7ab43077-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313053 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f8bb\" (UniqueName: \"kubernetes.io/projected/9e2af7a5-08f0-4784-baec-f47dd090ac37-kube-api-access-7f8bb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313068 4886 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e2af7a5-08f0-4784-baec-f47dd090ac37-var-run-ovn\") on node \"crc\" DevicePath 
\"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313080 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313094 4886 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e2af7a5-08f0-4784-baec-f47dd090ac37-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:18 crc kubenswrapper[4886]: I1124 09:07:18.313105 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blnff\" (UniqueName: \"kubernetes.io/projected/63eb5a8c-8a76-421f-9a44-a63c7ab43077-kube-api-access-blnff\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:19 crc kubenswrapper[4886]: I1124 09:07:19.211048 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rzmth-config-gbppv"] Nov 24 09:07:19 crc kubenswrapper[4886]: I1124 09:07:19.225543 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rzmth-config-gbppv"] Nov 24 09:07:20 crc kubenswrapper[4886]: I1124 09:07:20.867078 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e2af7a5-08f0-4784-baec-f47dd090ac37" path="/var/lib/kubelet/pods/9e2af7a5-08f0-4784-baec-f47dd090ac37/volumes" Nov 24 09:07:22 crc kubenswrapper[4886]: I1124 09:07:22.269898 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t7vln" event={"ID":"606b0ae4-0857-44f3-a72a-aa8cfa5416ef","Type":"ContainerStarted","Data":"42803f854b1af557b3c6ed91baf22b6bb75aa4c3c8cc122b053f1b891d9e59bd"} Nov 24 09:07:22 crc kubenswrapper[4886]: I1124 09:07:22.275253 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"056dfe4ed337248109751999e663fb31ecd10f2fbcdbcbdcc2472385b6bfaec5"} Nov 24 09:07:22 crc kubenswrapper[4886]: I1124 09:07:22.275309 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"c869b679d80c6da5632586d0e9686d143ee5325c575b930dd7932cfa7ccb8039"} Nov 24 09:07:22 crc kubenswrapper[4886]: I1124 09:07:22.275319 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"3d4193b28bb2daca19f9fc71f8b86278be081e9c6cd2685c2f7b231fc0a6c62f"} Nov 24 09:07:22 crc kubenswrapper[4886]: I1124 09:07:22.275331 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"5e1fdd72d6a4806ef8b24e521d78c8ddff92f25a518ae6b16e2a32bee4e3dba7"} Nov 24 09:07:22 crc kubenswrapper[4886]: I1124 09:07:22.292471 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-t7vln" podStartSLOduration=2.808636146 podStartE2EDuration="9.292445728s" podCreationTimestamp="2025-11-24 09:07:13 +0000 UTC" firstStartedPulling="2025-11-24 09:07:15.025879075 +0000 UTC m=+1090.912617200" lastFinishedPulling="2025-11-24 09:07:21.509688657 +0000 UTC m=+1097.396426782" observedRunningTime="2025-11-24 09:07:22.291931113 +0000 UTC m=+1098.178669258" watchObservedRunningTime="2025-11-24 09:07:22.292445728 +0000 UTC m=+1098.179183873" Nov 24 09:07:23 crc kubenswrapper[4886]: I1124 09:07:23.300183 4886 generic.go:334] "Generic (PLEG): container finished" podID="f45428c8-b123-4e3e-9ba0-5ab11cf317a5" containerID="4bcf034099c8cf9144e31f390a627282fb58c0b2bce78275df11349602751575" exitCode=0 Nov 24 09:07:23 crc kubenswrapper[4886]: I1124 09:07:23.300254 
4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4sj8x" event={"ID":"f45428c8-b123-4e3e-9ba0-5ab11cf317a5","Type":"ContainerDied","Data":"4bcf034099c8cf9144e31f390a627282fb58c0b2bce78275df11349602751575"} Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.318277 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"fac2f9160fd9153d2cd8bbfc95a87ddfbc8d59fad488ace9b4ca4fe951d9e622"} Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.318609 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"3ce6f3b74e22765458632c4ebc0328486331c815c2fbe4ed2be99b2d353b0daa"} Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.318624 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"fadafcf9029b6e9ba75d220fedadad81da90875dfa8072e09f7a85802b8778c3"} Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.318636 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"bd690aa5abede61fb388d32ac02d4ec33a593a183d3da042fc97204c455739b3"} Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.865922 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4sj8x" Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.945686 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-config-data\") pod \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.945808 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-combined-ca-bundle\") pod \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.945932 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-db-sync-config-data\") pod \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.946020 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-547jv\" (UniqueName: \"kubernetes.io/projected/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-kube-api-access-547jv\") pod \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\" (UID: \"f45428c8-b123-4e3e-9ba0-5ab11cf317a5\") " Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.958635 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-kube-api-access-547jv" (OuterVolumeSpecName: "kube-api-access-547jv") pod "f45428c8-b123-4e3e-9ba0-5ab11cf317a5" (UID: "f45428c8-b123-4e3e-9ba0-5ab11cf317a5"). InnerVolumeSpecName "kube-api-access-547jv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.964320 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f45428c8-b123-4e3e-9ba0-5ab11cf317a5" (UID: "f45428c8-b123-4e3e-9ba0-5ab11cf317a5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:24 crc kubenswrapper[4886]: I1124 09:07:24.986131 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f45428c8-b123-4e3e-9ba0-5ab11cf317a5" (UID: "f45428c8-b123-4e3e-9ba0-5ab11cf317a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.010454 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-config-data" (OuterVolumeSpecName: "config-data") pod "f45428c8-b123-4e3e-9ba0-5ab11cf317a5" (UID: "f45428c8-b123-4e3e-9ba0-5ab11cf317a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.051043 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.051084 4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.051096 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-547jv\" (UniqueName: \"kubernetes.io/projected/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-kube-api-access-547jv\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.051111 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45428c8-b123-4e3e-9ba0-5ab11cf317a5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.333651 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"57b2b6aae7df6c85d86a4175ce8622f1a1d70c4d274d49c492639419ca0453c9"} Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.333705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"c5e1ec16303e5b2e0006c1d57c77c26c91179d85347ad654049b1944b3f6dd23"} Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.333715 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"65b7f4e6-3f5e-419b-9761-c0fc78a4632d","Type":"ContainerStarted","Data":"dd54abba078225722845c23b0ed3e78e1c6d6ad13a659629a566589d96a35cd3"} Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.337362 4886 generic.go:334] "Generic (PLEG): container finished" podID="606b0ae4-0857-44f3-a72a-aa8cfa5416ef" containerID="42803f854b1af557b3c6ed91baf22b6bb75aa4c3c8cc122b053f1b891d9e59bd" exitCode=0 Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.337622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t7vln" event={"ID":"606b0ae4-0857-44f3-a72a-aa8cfa5416ef","Type":"ContainerDied","Data":"42803f854b1af557b3c6ed91baf22b6bb75aa4c3c8cc122b053f1b891d9e59bd"} Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.339364 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4sj8x" event={"ID":"f45428c8-b123-4e3e-9ba0-5ab11cf317a5","Type":"ContainerDied","Data":"5a01627cc6be978907eec89b811bf58c99d17dd220703d43c3735633a89210b8"} Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.339396 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a01627cc6be978907eec89b811bf58c99d17dd220703d43c3735633a89210b8" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.339461 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4sj8x" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.393188 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.655629263 podStartE2EDuration="47.393165385s" podCreationTimestamp="2025-11-24 09:06:38 +0000 UTC" firstStartedPulling="2025-11-24 09:07:14.639569884 +0000 UTC m=+1090.526308019" lastFinishedPulling="2025-11-24 09:07:23.377106006 +0000 UTC m=+1099.263844141" observedRunningTime="2025-11-24 09:07:25.383014085 +0000 UTC m=+1101.269752220" watchObservedRunningTime="2025-11-24 09:07:25.393165385 +0000 UTC m=+1101.279903520" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.711593 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-vtw66"] Nov 24 09:07:25 crc kubenswrapper[4886]: E1124 09:07:25.712014 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa05ad9f-553a-4565-b074-6cae6220d5d1" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712031 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa05ad9f-553a-4565-b074-6cae6220d5d1" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: E1124 09:07:25.712046 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45428c8-b123-4e3e-9ba0-5ab11cf317a5" containerName="glance-db-sync" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712052 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45428c8-b123-4e3e-9ba0-5ab11cf317a5" containerName="glance-db-sync" Nov 24 09:07:25 crc kubenswrapper[4886]: E1124 09:07:25.712071 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe94e3da-9230-46bd-9139-1ec416d11108" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712077 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fe94e3da-9230-46bd-9139-1ec416d11108" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: E1124 09:07:25.712086 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712092 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: E1124 09:07:25.712108 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2af7a5-08f0-4784-baec-f47dd090ac37" containerName="ovn-config" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712114 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2af7a5-08f0-4784-baec-f47dd090ac37" containerName="ovn-config" Nov 24 09:07:25 crc kubenswrapper[4886]: E1124 09:07:25.712123 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7659c776-9218-442b-b813-ebff19a5e5ee" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712129 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7659c776-9218-442b-b813-ebff19a5e5ee" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: E1124 09:07:25.712145 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22909ed9-c35f-4768-ab83-9f8a3442718b" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712169 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="22909ed9-c35f-4768-ab83-9f8a3442718b" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: E1124 09:07:25.712184 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63eb5a8c-8a76-421f-9a44-a63c7ab43077" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712190 4886 
state_mem.go:107] "Deleted CPUSet assignment" podUID="63eb5a8c-8a76-421f-9a44-a63c7ab43077" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712360 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="63eb5a8c-8a76-421f-9a44-a63c7ab43077" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712371 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7659c776-9218-442b-b813-ebff19a5e5ee" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712383 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="22909ed9-c35f-4768-ab83-9f8a3442718b" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712392 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa05ad9f-553a-4565-b074-6cae6220d5d1" containerName="mariadb-account-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712399 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e2af7a5-08f0-4784-baec-f47dd090ac37" containerName="ovn-config" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712411 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45428c8-b123-4e3e-9ba0-5ab11cf317a5" containerName="glance-db-sync" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712421 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.712428 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe94e3da-9230-46bd-9139-1ec416d11108" containerName="mariadb-database-create" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.713350 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.736489 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-vtw66"] Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.784423 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5cw\" (UniqueName: \"kubernetes.io/projected/f2446af2-09f6-446e-8baf-68b047d15d5f-kube-api-access-xg5cw\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.788400 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.789299 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.790212 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-dns-svc\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.790760 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-config\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.894351 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-config\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.894441 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg5cw\" (UniqueName: \"kubernetes.io/projected/f2446af2-09f6-446e-8baf-68b047d15d5f-kube-api-access-xg5cw\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.894574 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.894632 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.894687 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-dns-svc\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.895974 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-dns-svc\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.896672 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-config\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.897777 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.898265 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.934647 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg5cw\" (UniqueName: \"kubernetes.io/projected/f2446af2-09f6-446e-8baf-68b047d15d5f-kube-api-access-xg5cw\") pod \"dnsmasq-dns-74dc88fc-vtw66\" (UID: 
\"f2446af2-09f6-446e-8baf-68b047d15d5f\") " pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.944275 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-vtw66"] Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.945138 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.981844 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hd8tz"] Nov 24 09:07:25 crc kubenswrapper[4886]: I1124 09:07:25.993217 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:25.996561 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.044237 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hd8tz"] Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.097663 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-config\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.097728 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.097776 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.097842 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.097868 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvcgg\" (UniqueName: \"kubernetes.io/projected/001c7b85-45be-47cf-bef5-554a2710e240-kube-api-access-hvcgg\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.097882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.200287 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 
09:07:26.200413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.200451 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvcgg\" (UniqueName: \"kubernetes.io/projected/001c7b85-45be-47cf-bef5-554a2710e240-kube-api-access-hvcgg\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.200477 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.200522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-config\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.200558 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.201762 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.201954 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.202246 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.202776 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-config\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.203365 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.224912 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvcgg\" (UniqueName: 
\"kubernetes.io/projected/001c7b85-45be-47cf-bef5-554a2710e240-kube-api-access-hvcgg\") pod \"dnsmasq-dns-5f59b8f679-hd8tz\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.411307 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:26 crc kubenswrapper[4886]: W1124 09:07:26.679894 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2446af2_09f6_446e_8baf_68b047d15d5f.slice/crio-9c8cbc17fbb632ddf22962dd5bba21a2f24f85e64c08e931dc71a49c0019a075 WatchSource:0}: Error finding container 9c8cbc17fbb632ddf22962dd5bba21a2f24f85e64c08e931dc71a49c0019a075: Status 404 returned error can't find the container with id 9c8cbc17fbb632ddf22962dd5bba21a2f24f85e64c08e931dc71a49c0019a075 Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.685369 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-vtw66"] Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.697558 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.813297 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td9bp\" (UniqueName: \"kubernetes.io/projected/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-kube-api-access-td9bp\") pod \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.813533 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-config-data\") pod \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.813601 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-combined-ca-bundle\") pod \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\" (UID: \"606b0ae4-0857-44f3-a72a-aa8cfa5416ef\") " Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.818833 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-kube-api-access-td9bp" (OuterVolumeSpecName: "kube-api-access-td9bp") pod "606b0ae4-0857-44f3-a72a-aa8cfa5416ef" (UID: "606b0ae4-0857-44f3-a72a-aa8cfa5416ef"). InnerVolumeSpecName "kube-api-access-td9bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.848101 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "606b0ae4-0857-44f3-a72a-aa8cfa5416ef" (UID: "606b0ae4-0857-44f3-a72a-aa8cfa5416ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.870806 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-config-data" (OuterVolumeSpecName: "config-data") pod "606b0ae4-0857-44f3-a72a-aa8cfa5416ef" (UID: "606b0ae4-0857-44f3-a72a-aa8cfa5416ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.916274 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td9bp\" (UniqueName: \"kubernetes.io/projected/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-kube-api-access-td9bp\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.916342 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:26 crc kubenswrapper[4886]: I1124 09:07:26.916358 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606b0ae4-0857-44f3-a72a-aa8cfa5416ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.006169 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hd8tz"] Nov 24 09:07:27 crc kubenswrapper[4886]: W1124 09:07:27.058127 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod001c7b85_45be_47cf_bef5_554a2710e240.slice/crio-28288d8f600640843c5c6bedba4d4bab58e65d221d89d26442c122de75174a1d WatchSource:0}: Error finding container 28288d8f600640843c5c6bedba4d4bab58e65d221d89d26442c122de75174a1d: Status 404 returned error can't find the container with id 28288d8f600640843c5c6bedba4d4bab58e65d221d89d26442c122de75174a1d 
Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.358691 4886 generic.go:334] "Generic (PLEG): container finished" podID="001c7b85-45be-47cf-bef5-554a2710e240" containerID="74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7" exitCode=0 Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.358802 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" event={"ID":"001c7b85-45be-47cf-bef5-554a2710e240","Type":"ContainerDied","Data":"74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7"} Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.358853 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" event={"ID":"001c7b85-45be-47cf-bef5-554a2710e240","Type":"ContainerStarted","Data":"28288d8f600640843c5c6bedba4d4bab58e65d221d89d26442c122de75174a1d"} Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.361622 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t7vln" event={"ID":"606b0ae4-0857-44f3-a72a-aa8cfa5416ef","Type":"ContainerDied","Data":"7f8949908f56bbd8b1222aa63e88d37caf33e13ebd6d7b9e1d476736e5fe801d"} Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.361682 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f8949908f56bbd8b1222aa63e88d37caf33e13ebd6d7b9e1d476736e5fe801d" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.361706 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t7vln" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.367534 4886 generic.go:334] "Generic (PLEG): container finished" podID="f2446af2-09f6-446e-8baf-68b047d15d5f" containerID="2b0c6732f2e6ec40a1d33d5470a44597f8cccb918907bb95e23d09c191033650" exitCode=0 Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.367597 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-vtw66" event={"ID":"f2446af2-09f6-446e-8baf-68b047d15d5f","Type":"ContainerDied","Data":"2b0c6732f2e6ec40a1d33d5470a44597f8cccb918907bb95e23d09c191033650"} Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.367626 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-vtw66" event={"ID":"f2446af2-09f6-446e-8baf-68b047d15d5f","Type":"ContainerStarted","Data":"9c8cbc17fbb632ddf22962dd5bba21a2f24f85e64c08e931dc71a49c0019a075"} Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.632947 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hd8tz"] Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.701311 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wsgjs"] Nov 24 09:07:27 crc kubenswrapper[4886]: E1124 09:07:27.701870 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606b0ae4-0857-44f3-a72a-aa8cfa5416ef" containerName="keystone-db-sync" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.701892 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="606b0ae4-0857-44f3-a72a-aa8cfa5416ef" containerName="keystone-db-sync" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.702079 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="606b0ae4-0857-44f3-a72a-aa8cfa5416ef" containerName="keystone-db-sync" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.702951 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.708138 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.708384 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.708518 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.714868 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tfhvb" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.715165 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.728270 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wsgjs"] Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.742811 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gs4gc"] Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.745569 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.763016 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gs4gc"] Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863555 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-config\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863622 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-combined-ca-bundle\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863646 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-config-data\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863715 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863731 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6m7x\" (UniqueName: \"kubernetes.io/projected/5511ae3a-4fa5-400a-88c6-e9fca01787cf-kube-api-access-d6m7x\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863753 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-fernet-keys\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863784 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-scripts\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863836 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-credential-keys\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863889 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8twnt\" (UniqueName: \"kubernetes.io/projected/f97f59b9-899b-419b-88d9-7aa58d892ffc-kube-api-access-8twnt\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863908 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.863929 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.925480 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.967772 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8twnt\" (UniqueName: \"kubernetes.io/projected/f97f59b9-899b-419b-88d9-7aa58d892ffc-kube-api-access-8twnt\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.967823 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.967858 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.967910 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-config\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.967968 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-combined-ca-bundle\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 
09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.967989 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-config-data\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.968025 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.968268 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dc555666f-6hzb6"] Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.968317 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.968416 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6m7x\" (UniqueName: \"kubernetes.io/projected/5511ae3a-4fa5-400a-88c6-e9fca01787cf-kube-api-access-d6m7x\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.968473 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-fernet-keys\") pod \"keystone-bootstrap-wsgjs\" (UID: 
\"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.968596 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-scripts\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.968830 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-credential-keys\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:27 crc kubenswrapper[4886]: E1124 09:07:27.969042 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2446af2-09f6-446e-8baf-68b047d15d5f" containerName="init" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.969061 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2446af2-09f6-446e-8baf-68b047d15d5f" containerName="init" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.969436 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2446af2-09f6-446e-8baf-68b047d15d5f" containerName="init" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.971464 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.978887 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.979534 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.979907 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.979987 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.980312 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-62trf" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.986605 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:27 crc kubenswrapper[4886]: I1124 09:07:27.997848 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.000780 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.012862 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-scripts\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.016559 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-credential-keys\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.035128 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-combined-ca-bundle\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.035732 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-fernet-keys\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.038254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-config\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: 
\"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.049508 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6m7x\" (UniqueName: \"kubernetes.io/projected/5511ae3a-4fa5-400a-88c6-e9fca01787cf-kube-api-access-d6m7x\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.058023 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-config-data\") pod \"keystone-bootstrap-wsgjs\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.072007 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-ph4gr"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.077971 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.080428 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.083036 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8twnt\" (UniqueName: \"kubernetes.io/projected/f97f59b9-899b-419b-88d9-7aa58d892ffc-kube-api-access-8twnt\") pod \"dnsmasq-dns-bbf5cc879-gs4gc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.102893 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.103166 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.103300 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-h4lgq" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.104330 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.105224 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-nb\") pod \"f2446af2-09f6-446e-8baf-68b047d15d5f\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.105391 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-config\") pod \"f2446af2-09f6-446e-8baf-68b047d15d5f\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.105431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-sb\") pod \"f2446af2-09f6-446e-8baf-68b047d15d5f\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.105474 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-dns-svc\") pod \"f2446af2-09f6-446e-8baf-68b047d15d5f\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.105592 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg5cw\" (UniqueName: \"kubernetes.io/projected/f2446af2-09f6-446e-8baf-68b047d15d5f-kube-api-access-xg5cw\") pod \"f2446af2-09f6-446e-8baf-68b047d15d5f\" (UID: \"f2446af2-09f6-446e-8baf-68b047d15d5f\") " Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.105930 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-scripts\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.106024 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6r2b\" (UniqueName: \"kubernetes.io/projected/9c272c26-c042-4e0d-a35d-2ff5d329f901-kube-api-access-h6r2b\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.106060 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-config-data\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.106116 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c272c26-c042-4e0d-a35d-2ff5d329f901-logs\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.119298 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c272c26-c042-4e0d-a35d-2ff5d329f901-horizon-secret-key\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.147168 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ph4gr"] Nov 24 09:07:28 crc 
kubenswrapper[4886]: I1124 09:07:28.183165 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2446af2-09f6-446e-8baf-68b047d15d5f" (UID: "f2446af2-09f6-446e-8baf-68b047d15d5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.183563 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dc555666f-6hzb6"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.187637 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2446af2-09f6-446e-8baf-68b047d15d5f-kube-api-access-xg5cw" (OuterVolumeSpecName: "kube-api-access-xg5cw") pod "f2446af2-09f6-446e-8baf-68b047d15d5f" (UID: "f2446af2-09f6-446e-8baf-68b047d15d5f"). InnerVolumeSpecName "kube-api-access-xg5cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.205385 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-config" (OuterVolumeSpecName: "config") pod "f2446af2-09f6-446e-8baf-68b047d15d5f" (UID: "f2446af2-09f6-446e-8baf-68b047d15d5f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.221992 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-scripts\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222071 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-scripts\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222098 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqwfs\" (UniqueName: \"kubernetes.io/projected/7ca0ca62-7545-4e1a-9969-121899a789b0-kube-api-access-cqwfs\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222166 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-config-data\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222203 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ca0ca62-7545-4e1a-9969-121899a789b0-etc-machine-id\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc 
kubenswrapper[4886]: I1124 09:07:28.222223 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-combined-ca-bundle\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222243 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6r2b\" (UniqueName: \"kubernetes.io/projected/9c272c26-c042-4e0d-a35d-2ff5d329f901-kube-api-access-h6r2b\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222262 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-config-data\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222548 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c272c26-c042-4e0d-a35d-2ff5d329f901-logs\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222683 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c272c26-c042-4e0d-a35d-2ff5d329f901-horizon-secret-key\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.222813 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-db-sync-config-data\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.223046 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg5cw\" (UniqueName: \"kubernetes.io/projected/f2446af2-09f6-446e-8baf-68b047d15d5f-kube-api-access-xg5cw\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.223066 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.223079 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.223449 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-config-data\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.224004 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-scripts\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.224087 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9c272c26-c042-4e0d-a35d-2ff5d329f901-logs\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.253102 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c272c26-c042-4e0d-a35d-2ff5d329f901-horizon-secret-key\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.272866 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6r2b\" (UniqueName: \"kubernetes.io/projected/9c272c26-c042-4e0d-a35d-2ff5d329f901-kube-api-access-h6r2b\") pod \"horizon-5dc555666f-6hzb6\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.279518 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.294508 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2446af2-09f6-446e-8baf-68b047d15d5f" (UID: "f2446af2-09f6-446e-8baf-68b047d15d5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.300000 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.301971 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2446af2-09f6-446e-8baf-68b047d15d5f" (UID: "f2446af2-09f6-446e-8baf-68b047d15d5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.316510 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.318091 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.326907 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-config-data\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.326980 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ca0ca62-7545-4e1a-9969-121899a789b0-etc-machine-id\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.327023 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-combined-ca-bundle\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 
09:07:28.327179 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-db-sync-config-data\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.327261 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-scripts\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.327289 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqwfs\" (UniqueName: \"kubernetes.io/projected/7ca0ca62-7545-4e1a-9969-121899a789b0-kube-api-access-cqwfs\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.327367 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.327379 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2446af2-09f6-446e-8baf-68b047d15d5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.327945 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ca0ca62-7545-4e1a-9969-121899a789b0-etc-machine-id\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc 
kubenswrapper[4886]: I1124 09:07:28.338709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-db-sync-config-data\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.339889 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-scripts\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.342327 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-config-data\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.349987 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-combined-ca-bundle\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.365372 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqwfs\" (UniqueName: \"kubernetes.io/projected/7ca0ca62-7545-4e1a-9969-121899a789b0-kube-api-access-cqwfs\") pod \"cinder-db-sync-ph4gr\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.392772 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 
09:07:28.400625 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-vtw66" event={"ID":"f2446af2-09f6-446e-8baf-68b047d15d5f","Type":"ContainerDied","Data":"9c8cbc17fbb632ddf22962dd5bba21a2f24f85e64c08e931dc71a49c0019a075"} Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.400696 4886 scope.go:117] "RemoveContainer" containerID="2b0c6732f2e6ec40a1d33d5470a44597f8cccb918907bb95e23d09c191033650" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.400857 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-vtw66" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.417284 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gs4gc"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.428914 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.429091 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-scripts\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.429127 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-log-httpd\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.429186 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.429252 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-config-data\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.429277 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-run-httpd\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.429337 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqlz2\" (UniqueName: \"kubernetes.io/projected/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-kube-api-access-sqlz2\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.433432 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c858856cc-mpd52"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.435580 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.437775 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" event={"ID":"001c7b85-45be-47cf-bef5-554a2710e240","Type":"ContainerStarted","Data":"6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a"} Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.437919 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" podUID="001c7b85-45be-47cf-bef5-554a2710e240" containerName="dnsmasq-dns" containerID="cri-o://6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a" gracePeriod=10 Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.437968 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.443317 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lqtgn"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.448588 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.454071 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-8hkhc"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.456123 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.456123 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.457104 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pjfbc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.463558 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.471503 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5m576" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.471990 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lqtgn"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.482900 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.483477 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8bzdg"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.493804 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.496191 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8hkhc"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.499951 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.500293 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wwn5r" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.502043 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.504894 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c858856cc-mpd52"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.505817 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.515539 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8bzdg"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.528263 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-8wwcw"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.530827 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533327 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe0358bf-ca1c-425a-91eb-4a2a6435f618-horizon-secret-key\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533376 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-config-data\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533433 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vn5\" (UniqueName: \"kubernetes.io/projected/fe0358bf-ca1c-425a-91eb-4a2a6435f618-kube-api-access-s8vn5\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533521 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-scripts\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-log-httpd\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc 
kubenswrapper[4886]: I1124 09:07:28.533577 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533603 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-config-data\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533625 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-run-httpd\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533658 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-combined-ca-bundle\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533685 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-scripts\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533739 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-v5x4t\" (UniqueName: \"kubernetes.io/projected/390e7d30-f337-4255-a488-3b5b345235ed-kube-api-access-v5x4t\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533784 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-config\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqlz2\" (UniqueName: \"kubernetes.io/projected/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-kube-api-access-sqlz2\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533884 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe0358bf-ca1c-425a-91eb-4a2a6435f618-logs\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.533908 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.536789 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-run-httpd\") pod \"ceilometer-0\" (UID: 
\"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.537197 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-log-httpd\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.550045 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.550402 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-scripts\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.550565 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-config-data\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.556636 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.563345 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-8wwcw"] Nov 24 09:07:28 crc kubenswrapper[4886]: 
I1124 09:07:28.563850 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.584750 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqlz2\" (UniqueName: \"kubernetes.io/projected/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-kube-api-access-sqlz2\") pod \"ceilometer-0\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.588139 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.634469 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.635331 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.635937 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644427 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-combined-ca-bundle\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644504 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-scripts\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644604 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5x4t\" (UniqueName: \"kubernetes.io/projected/390e7d30-f337-4255-a488-3b5b345235ed-kube-api-access-v5x4t\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644691 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-config\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644775 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-combined-ca-bundle\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644805 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644826 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644860 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-config-data\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644923 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-config\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644952 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe0358bf-ca1c-425a-91eb-4a2a6435f618-logs\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.644984 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-combined-ca-bundle\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645019 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-db-sync-config-data\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645062 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-scripts\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645135 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe0358bf-ca1c-425a-91eb-4a2a6435f618-horizon-secret-key\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645203 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-config-data\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645293 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vn5\" (UniqueName: \"kubernetes.io/projected/fe0358bf-ca1c-425a-91eb-4a2a6435f618-kube-api-access-s8vn5\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645331 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645446 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjjwb\" (UniqueName: \"kubernetes.io/projected/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-kube-api-access-vjjwb\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645478 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2t9\" (UniqueName: \"kubernetes.io/projected/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-kube-api-access-qs2t9\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645533 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9wt\" (UniqueName: \"kubernetes.io/projected/d02e92b5-fa15-43ce-a8aa-c3dc06490056-kube-api-access-lx9wt\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.645588 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-logs\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.650525 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fe0358bf-ca1c-425a-91eb-4a2a6435f618-logs\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.651193 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-scripts\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.654486 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-combined-ca-bundle\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.656213 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-config-data\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.658058 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe0358bf-ca1c-425a-91eb-4a2a6435f618-horizon-secret-key\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.658720 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ld45n" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.658748 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"glance-default-external-config-data" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.658792 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.660960 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-config\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.661574 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.672315 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-vtw66"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.682019 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5x4t\" (UniqueName: \"kubernetes.io/projected/390e7d30-f337-4255-a488-3b5b345235ed-kube-api-access-v5x4t\") pod \"neutron-db-sync-lqtgn\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.692220 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-vtw66"] Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.696045 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vn5\" (UniqueName: \"kubernetes.io/projected/fe0358bf-ca1c-425a-91eb-4a2a6435f618-kube-api-access-s8vn5\") pod \"horizon-c858856cc-mpd52\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.700137 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" podStartSLOduration=3.700113912 podStartE2EDuration="3.700113912s" podCreationTimestamp="2025-11-24 09:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:28.571900274 +0000 UTC m=+1104.458638409" watchObservedRunningTime="2025-11-24 09:07:28.700113912 +0000 UTC m=+1104.586852047" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.751723 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.751814 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjjwb\" (UniqueName: \"kubernetes.io/projected/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-kube-api-access-vjjwb\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.751852 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2t9\" (UniqueName: \"kubernetes.io/projected/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-kube-api-access-qs2t9\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.751897 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9wt\" (UniqueName: \"kubernetes.io/projected/d02e92b5-fa15-43ce-a8aa-c3dc06490056-kube-api-access-lx9wt\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.751934 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-logs\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.751966 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752031 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752115 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752169 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: 
I1124 09:07:28.752198 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-combined-ca-bundle\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752223 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752252 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752278 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-config-data\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752304 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-logs\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752340 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-config\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752369 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-combined-ca-bundle\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752429 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssf5d\" (UniqueName: \"kubernetes.io/projected/bf964533-7535-4425-a880-9b95595188b4-kube-api-access-ssf5d\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-db-sync-config-data\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752513 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-scripts\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.752575 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.754385 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.755649 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-logs\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.756336 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.757916 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-config\") 
pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.760297 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.761062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.772266 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-scripts\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.778893 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-db-sync-config-data\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.800863 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-combined-ca-bundle\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " 
pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.801331 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2t9\" (UniqueName: \"kubernetes.io/projected/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-kube-api-access-qs2t9\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.802963 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-combined-ca-bundle\") pod \"barbican-db-sync-8hkhc\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.803184 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjjwb\" (UniqueName: \"kubernetes.io/projected/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-kube-api-access-vjjwb\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.803217 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-config-data\") pod \"placement-db-sync-8bzdg\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.803418 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9wt\" (UniqueName: \"kubernetes.io/projected/d02e92b5-fa15-43ce-a8aa-c3dc06490056-kube-api-access-lx9wt\") pod \"dnsmasq-dns-56df8fb6b7-8wwcw\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 
09:07:28.862571 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssf5d\" (UniqueName: \"kubernetes.io/projected/bf964533-7535-4425-a880-9b95595188b4-kube-api-access-ssf5d\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.862660 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.862771 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.863086 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.863233 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.863314 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.863439 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-logs\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.868246 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.869410 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-logs\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.876060 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.877709 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.879467 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.880842 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.893511 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.899779 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.910851 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2446af2-09f6-446e-8baf-68b047d15d5f" path="/var/lib/kubelet/pods/f2446af2-09f6-446e-8baf-68b047d15d5f/volumes" Nov 24 
09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.911457 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssf5d\" (UniqueName: \"kubernetes.io/projected/bf964533-7535-4425-a880-9b95595188b4-kube-api-access-ssf5d\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.941814 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:07:28 crc kubenswrapper[4886]: I1124 09:07:28.949875 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " pod="openstack/glance-default-external-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:28.999261 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8bzdg" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.012897 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.027731 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gs4gc"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.031816 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.066166 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.074652 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.078697 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.125559 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.151508 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wsgjs"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.177726 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.178176 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.178326 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.178405 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.178477 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m5fd\" (UniqueName: \"kubernetes.io/projected/6bd3c5c6-a206-47a1-9eb6-283268b83e61-kube-api-access-7m5fd\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.178589 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.179479 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.281624 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.282187 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.282247 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.282306 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.282297 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.282327 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.282348 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m5fd\" (UniqueName: 
\"kubernetes.io/projected/6bd3c5c6-a206-47a1-9eb6-283268b83e61-kube-api-access-7m5fd\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.282390 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.283517 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.288273 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-logs\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.292288 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.295933 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.303482 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.308558 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.310126 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m5fd\" (UniqueName: \"kubernetes.io/projected/6bd3c5c6-a206-47a1-9eb6-283268b83e61-kube-api-access-7m5fd\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.322201 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.383607 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-nb\") pod \"001c7b85-45be-47cf-bef5-554a2710e240\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.383886 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-config\") pod \"001c7b85-45be-47cf-bef5-554a2710e240\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.383988 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-sb\") pod \"001c7b85-45be-47cf-bef5-554a2710e240\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.384030 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-swift-storage-0\") pod \"001c7b85-45be-47cf-bef5-554a2710e240\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.384096 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-svc\") pod \"001c7b85-45be-47cf-bef5-554a2710e240\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.384163 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvcgg\" (UniqueName: \"kubernetes.io/projected/001c7b85-45be-47cf-bef5-554a2710e240-kube-api-access-hvcgg\") pod \"001c7b85-45be-47cf-bef5-554a2710e240\" (UID: \"001c7b85-45be-47cf-bef5-554a2710e240\") " Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.391282 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/001c7b85-45be-47cf-bef5-554a2710e240-kube-api-access-hvcgg" (OuterVolumeSpecName: "kube-api-access-hvcgg") pod "001c7b85-45be-47cf-bef5-554a2710e240" (UID: "001c7b85-45be-47cf-bef5-554a2710e240"). 
InnerVolumeSpecName "kube-api-access-hvcgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.395080 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ph4gr"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.415028 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dc555666f-6hzb6"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.467637 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wsgjs" event={"ID":"5511ae3a-4fa5-400a-88c6-e9fca01787cf","Type":"ContainerStarted","Data":"5831e35c44b5aa40bbf4cb079bd39e59baf3e45cbdf2be91b82d038a3552b1f0"} Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.477688 4886 generic.go:334] "Generic (PLEG): container finished" podID="001c7b85-45be-47cf-bef5-554a2710e240" containerID="6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a" exitCode=0 Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.477889 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" event={"ID":"001c7b85-45be-47cf-bef5-554a2710e240","Type":"ContainerDied","Data":"6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a"} Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.477934 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" event={"ID":"001c7b85-45be-47cf-bef5-554a2710e240","Type":"ContainerDied","Data":"28288d8f600640843c5c6bedba4d4bab58e65d221d89d26442c122de75174a1d"} Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.477976 4886 scope.go:117] "RemoveContainer" containerID="6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.478369 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-hd8tz" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.491877 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvcgg\" (UniqueName: \"kubernetes.io/projected/001c7b85-45be-47cf-bef5-554a2710e240-kube-api-access-hvcgg\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.510489 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "001c7b85-45be-47cf-bef5-554a2710e240" (UID: "001c7b85-45be-47cf-bef5-554a2710e240"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.512248 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "001c7b85-45be-47cf-bef5-554a2710e240" (UID: "001c7b85-45be-47cf-bef5-554a2710e240"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.519713 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ph4gr" event={"ID":"7ca0ca62-7545-4e1a-9969-121899a789b0","Type":"ContainerStarted","Data":"98f958da8dece62769324b8e213afe34afbd39006950094f5969d58aa7ae1600"} Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.521805 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" event={"ID":"f97f59b9-899b-419b-88d9-7aa58d892ffc","Type":"ContainerStarted","Data":"92d10829e245d2042efb11e3e4677aba173d7ee04855904488b49f20c42f0fe4"} Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.529877 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "001c7b85-45be-47cf-bef5-554a2710e240" (UID: "001c7b85-45be-47cf-bef5-554a2710e240"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.530373 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.543126 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-config" (OuterVolumeSpecName: "config") pod "001c7b85-45be-47cf-bef5-554a2710e240" (UID: "001c7b85-45be-47cf-bef5-554a2710e240"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.556719 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "001c7b85-45be-47cf-bef5-554a2710e240" (UID: "001c7b85-45be-47cf-bef5-554a2710e240"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.595952 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.603046 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.603079 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.603091 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.603100 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.603108 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001c7b85-45be-47cf-bef5-554a2710e240-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 
09:07:29.706995 4886 scope.go:117] "RemoveContainer" containerID="74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7" Nov 24 09:07:29 crc kubenswrapper[4886]: W1124 09:07:29.759597 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e8de0fe_585f_4cb8_8deb_d788e443fbd2.slice/crio-e4fe14f89fe8f1c0e76de2a63b4b7641b5a9028dd172bdce6649992e9ff00e85 WatchSource:0}: Error finding container e4fe14f89fe8f1c0e76de2a63b4b7641b5a9028dd172bdce6649992e9ff00e85: Status 404 returned error can't find the container with id e4fe14f89fe8f1c0e76de2a63b4b7641b5a9028dd172bdce6649992e9ff00e85 Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.768214 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c858856cc-mpd52"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.799740 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lqtgn"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.805318 4886 scope.go:117] "RemoveContainer" containerID="6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a" Nov 24 09:07:29 crc kubenswrapper[4886]: E1124 09:07:29.832487 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a\": container with ID starting with 6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a not found: ID does not exist" containerID="6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.832970 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a"} err="failed to get container status \"6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a\": rpc error: code = NotFound 
desc = could not find container \"6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a\": container with ID starting with 6ede237ae345f4416d9aae4efeb47470ccb1feef23b3ebac6335209cd319d73a not found: ID does not exist" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.833006 4886 scope.go:117] "RemoveContainer" containerID="74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7" Nov 24 09:07:29 crc kubenswrapper[4886]: E1124 09:07:29.834903 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7\": container with ID starting with 74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7 not found: ID does not exist" containerID="74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.834963 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7"} err="failed to get container status \"74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7\": rpc error: code = NotFound desc = could not find container \"74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7\": container with ID starting with 74ddac89a8ebf800ac8b1d28f49ab68e2b82129941e7631954e2482573558ba7 not found: ID does not exist" Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.874271 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hd8tz"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.890024 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-hd8tz"] Nov 24 09:07:29 crc kubenswrapper[4886]: I1124 09:07:29.948987 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8hkhc"] Nov 24 09:07:29 crc kubenswrapper[4886]: W1124 
09:07:29.965138 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e82e3b_0acc_454e_b8b5_cf584f3298b4.slice/crio-799cd6f62389d70b2a893ebe5bcef353dfad1932e8e7517fb2b1188548f9faf7 WatchSource:0}: Error finding container 799cd6f62389d70b2a893ebe5bcef353dfad1932e8e7517fb2b1188548f9faf7: Status 404 returned error can't find the container with id 799cd6f62389d70b2a893ebe5bcef353dfad1932e8e7517fb2b1188548f9faf7 Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.200225 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-8wwcw"] Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.216074 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8bzdg"] Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.335493 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.618011 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lqtgn" event={"ID":"390e7d30-f337-4255-a488-3b5b345235ed","Type":"ContainerStarted","Data":"c480d08fbd2e71950aed9f98c6ebcc8a3561ddbdab1354b5217a768a9b1eba27"} Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.634731 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e8de0fe-585f-4cb8-8deb-d788e443fbd2","Type":"ContainerStarted","Data":"e4fe14f89fe8f1c0e76de2a63b4b7641b5a9028dd172bdce6649992e9ff00e85"} Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.658008 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8hkhc" event={"ID":"a2e82e3b-0acc-454e-b8b5-cf584f3298b4","Type":"ContainerStarted","Data":"799cd6f62389d70b2a893ebe5bcef353dfad1932e8e7517fb2b1188548f9faf7"} Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.689026 4886 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/horizon-c858856cc-mpd52" event={"ID":"fe0358bf-ca1c-425a-91eb-4a2a6435f618","Type":"ContainerStarted","Data":"71f8438e6176c72b1d59c10883329cde381465513377a2a9747e24c4b5379575"} Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.706892 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd3c5c6-a206-47a1-9eb6-283268b83e61","Type":"ContainerStarted","Data":"319ed80aeb4d9ae11b0a60b7496685acd9f2d383d7d3463806ae2165d99ba7c5"} Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.734577 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dc555666f-6hzb6" event={"ID":"9c272c26-c042-4e0d-a35d-2ff5d329f901","Type":"ContainerStarted","Data":"9826f97e10a735a735636f97cd28b813e53398ff87d488f2603f81bd0598cf27"} Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.743966 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" event={"ID":"d02e92b5-fa15-43ce-a8aa-c3dc06490056","Type":"ContainerStarted","Data":"4fd89efe314c245138763f1ece726d81324a2a53de93c942959af9ab1ac20570"} Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.771907 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8bzdg" event={"ID":"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7","Type":"ContainerStarted","Data":"1167b45578b03296ec9f947936712436221f2689e4276ac7db0461dc9580911f"} Nov 24 09:07:30 crc kubenswrapper[4886]: I1124 09:07:30.867522 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="001c7b85-45be-47cf-bef5-554a2710e240" path="/var/lib/kubelet/pods/001c7b85-45be-47cf-bef5-554a2710e240/volumes" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.231822 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.580931 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.623494 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.641306 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dc555666f-6hzb6"] Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.689849 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5654c4b6cf-6mjl4"] Nov 24 09:07:31 crc kubenswrapper[4886]: E1124 09:07:31.693125 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001c7b85-45be-47cf-bef5-554a2710e240" containerName="init" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.693171 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="001c7b85-45be-47cf-bef5-554a2710e240" containerName="init" Nov 24 09:07:31 crc kubenswrapper[4886]: E1124 09:07:31.693197 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001c7b85-45be-47cf-bef5-554a2710e240" containerName="dnsmasq-dns" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.693206 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="001c7b85-45be-47cf-bef5-554a2710e240" containerName="dnsmasq-dns" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.693444 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="001c7b85-45be-47cf-bef5-554a2710e240" containerName="dnsmasq-dns" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.694544 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.717214 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5654c4b6cf-6mjl4"] Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.801261 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.806254 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b936c64-69ae-43db-9d33-9ed58719be26-logs\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.806330 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-config-data\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.806423 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b936c64-69ae-43db-9d33-9ed58719be26-horizon-secret-key\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.806456 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-scripts\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: 
I1124 09:07:31.806622 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhgr\" (UniqueName: \"kubernetes.io/projected/2b936c64-69ae-43db-9d33-9ed58719be26-kube-api-access-hfhgr\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.819807 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf964533-7535-4425-a880-9b95595188b4","Type":"ContainerStarted","Data":"52226747dd4cce06f34c07109b1a3034304a1e79db1260bf01dbc923dcf2c274"} Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.823108 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wsgjs" event={"ID":"5511ae3a-4fa5-400a-88c6-e9fca01787cf","Type":"ContainerStarted","Data":"a753f4d959369346b62434e569d11133ba573b9a5cbd32cd9161eeffadf88a2b"} Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.825988 4886 generic.go:334] "Generic (PLEG): container finished" podID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerID="07682178d95fbd8e2bc70ffbdb25293b719ac0df1e69af24562f01b4a6ba79a8" exitCode=0 Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.826115 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" event={"ID":"d02e92b5-fa15-43ce-a8aa-c3dc06490056","Type":"ContainerDied","Data":"07682178d95fbd8e2bc70ffbdb25293b719ac0df1e69af24562f01b4a6ba79a8"} Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.833233 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd3c5c6-a206-47a1-9eb6-283268b83e61","Type":"ContainerStarted","Data":"7485fe54ab25092f2dee41b85e54ada401bcf7466b136b5006d499ea64ac925a"} Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.854292 4886 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/keystone-bootstrap-wsgjs" podStartSLOduration=4.854252055 podStartE2EDuration="4.854252055s" podCreationTimestamp="2025-11-24 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:31.853596016 +0000 UTC m=+1107.740334151" watchObservedRunningTime="2025-11-24 09:07:31.854252055 +0000 UTC m=+1107.740990190" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.861001 4886 generic.go:334] "Generic (PLEG): container finished" podID="f97f59b9-899b-419b-88d9-7aa58d892ffc" containerID="f4ec5bde148fb86e781ccc8135d13284edaa28af332b597631343fb64f7a29bd" exitCode=0 Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.861114 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" event={"ID":"f97f59b9-899b-419b-88d9-7aa58d892ffc","Type":"ContainerDied","Data":"f4ec5bde148fb86e781ccc8135d13284edaa28af332b597631343fb64f7a29bd"} Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.869346 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lqtgn" event={"ID":"390e7d30-f337-4255-a488-3b5b345235ed","Type":"ContainerStarted","Data":"bad5972703be48ed87467a308a29d4c811db61eabcf9b35e13da11a1be3c6fe1"} Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.908420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfhgr\" (UniqueName: \"kubernetes.io/projected/2b936c64-69ae-43db-9d33-9ed58719be26-kube-api-access-hfhgr\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.908547 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b936c64-69ae-43db-9d33-9ed58719be26-logs\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: 
\"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.908607 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-config-data\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.908707 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b936c64-69ae-43db-9d33-9ed58719be26-horizon-secret-key\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.908739 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-scripts\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.913403 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b936c64-69ae-43db-9d33-9ed58719be26-logs\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.914680 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-scripts\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.915992 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-config-data\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.924266 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b936c64-69ae-43db-9d33-9ed58719be26-horizon-secret-key\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.942226 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfhgr\" (UniqueName: \"kubernetes.io/projected/2b936c64-69ae-43db-9d33-9ed58719be26-kube-api-access-hfhgr\") pod \"horizon-5654c4b6cf-6mjl4\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:31 crc kubenswrapper[4886]: I1124 09:07:31.943466 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lqtgn" podStartSLOduration=3.943429945 podStartE2EDuration="3.943429945s" podCreationTimestamp="2025-11-24 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:31.936767928 +0000 UTC m=+1107.823506073" watchObservedRunningTime="2025-11-24 09:07:31.943429945 +0000 UTC m=+1107.830168090" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.033903 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.399216 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.538838 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8twnt\" (UniqueName: \"kubernetes.io/projected/f97f59b9-899b-419b-88d9-7aa58d892ffc-kube-api-access-8twnt\") pod \"f97f59b9-899b-419b-88d9-7aa58d892ffc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.538886 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-sb\") pod \"f97f59b9-899b-419b-88d9-7aa58d892ffc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.538926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-nb\") pod \"f97f59b9-899b-419b-88d9-7aa58d892ffc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.539023 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-svc\") pod \"f97f59b9-899b-419b-88d9-7aa58d892ffc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.539062 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-swift-storage-0\") pod \"f97f59b9-899b-419b-88d9-7aa58d892ffc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.539094 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-config\") pod \"f97f59b9-899b-419b-88d9-7aa58d892ffc\" (UID: \"f97f59b9-899b-419b-88d9-7aa58d892ffc\") " Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.550187 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97f59b9-899b-419b-88d9-7aa58d892ffc-kube-api-access-8twnt" (OuterVolumeSpecName: "kube-api-access-8twnt") pod "f97f59b9-899b-419b-88d9-7aa58d892ffc" (UID: "f97f59b9-899b-419b-88d9-7aa58d892ffc"). InnerVolumeSpecName "kube-api-access-8twnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.586989 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f97f59b9-899b-419b-88d9-7aa58d892ffc" (UID: "f97f59b9-899b-419b-88d9-7aa58d892ffc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.593578 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f97f59b9-899b-419b-88d9-7aa58d892ffc" (UID: "f97f59b9-899b-419b-88d9-7aa58d892ffc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.602243 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-config" (OuterVolumeSpecName: "config") pod "f97f59b9-899b-419b-88d9-7aa58d892ffc" (UID: "f97f59b9-899b-419b-88d9-7aa58d892ffc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.641790 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8twnt\" (UniqueName: \"kubernetes.io/projected/f97f59b9-899b-419b-88d9-7aa58d892ffc-kube-api-access-8twnt\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.641828 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.641842 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.641851 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.644369 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f97f59b9-899b-419b-88d9-7aa58d892ffc" (UID: "f97f59b9-899b-419b-88d9-7aa58d892ffc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.671257 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f97f59b9-899b-419b-88d9-7aa58d892ffc" (UID: "f97f59b9-899b-419b-88d9-7aa58d892ffc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.744818 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.744856 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f97f59b9-899b-419b-88d9-7aa58d892ffc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.929884 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.930280 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-gs4gc" event={"ID":"f97f59b9-899b-419b-88d9-7aa58d892ffc","Type":"ContainerDied","Data":"92d10829e245d2042efb11e3e4677aba173d7ee04855904488b49f20c42f0fe4"} Nov 24 09:07:32 crc kubenswrapper[4886]: I1124 09:07:32.930840 4886 scope.go:117] "RemoveContainer" containerID="f4ec5bde148fb86e781ccc8135d13284edaa28af332b597631343fb64f7a29bd" Nov 24 09:07:33 crc kubenswrapper[4886]: I1124 09:07:33.010007 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5654c4b6cf-6mjl4"] Nov 24 09:07:33 crc kubenswrapper[4886]: I1124 09:07:33.123248 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gs4gc"] Nov 24 09:07:33 crc kubenswrapper[4886]: I1124 09:07:33.147460 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gs4gc"] Nov 24 09:07:34 crc kubenswrapper[4886]: I1124 09:07:34.875763 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97f59b9-899b-419b-88d9-7aa58d892ffc" 
path="/var/lib/kubelet/pods/f97f59b9-899b-419b-88d9-7aa58d892ffc/volumes" Nov 24 09:07:35 crc kubenswrapper[4886]: I1124 09:07:35.981231 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5654c4b6cf-6mjl4" event={"ID":"2b936c64-69ae-43db-9d33-9ed58719be26","Type":"ContainerStarted","Data":"8b06a7c5fafc3293b46c9472a0530b27467a672c2189bfe8a5b4bf7ab36d68c3"} Nov 24 09:07:37 crc kubenswrapper[4886]: I1124 09:07:37.010396 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" event={"ID":"d02e92b5-fa15-43ce-a8aa-c3dc06490056","Type":"ContainerStarted","Data":"eab15089d73b98cd36e88c196689d7cf11cf9012d2d1eaf43de60cdd938e3822"} Nov 24 09:07:37 crc kubenswrapper[4886]: I1124 09:07:37.010577 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:37 crc kubenswrapper[4886]: I1124 09:07:37.022383 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd3c5c6-a206-47a1-9eb6-283268b83e61","Type":"ContainerStarted","Data":"afe740630b709ba6ed8b450d6ac5752362f76259ab9a899a8bf8a08abdc93b6d"} Nov 24 09:07:37 crc kubenswrapper[4886]: I1124 09:07:37.022596 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerName="glance-log" containerID="cri-o://7485fe54ab25092f2dee41b85e54ada401bcf7466b136b5006d499ea64ac925a" gracePeriod=30 Nov 24 09:07:37 crc kubenswrapper[4886]: I1124 09:07:37.022720 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerName="glance-httpd" containerID="cri-o://afe740630b709ba6ed8b450d6ac5752362f76259ab9a899a8bf8a08abdc93b6d" gracePeriod=30 Nov 24 09:07:37 crc kubenswrapper[4886]: I1124 09:07:37.039355 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf964533-7535-4425-a880-9b95595188b4","Type":"ContainerStarted","Data":"ca45e518e732ccb8752a7945fedfd2abc7db31393667870fd9da4b132a0bff5b"} Nov 24 09:07:37 crc kubenswrapper[4886]: I1124 09:07:37.066022 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" podStartSLOduration=9.066000839 podStartE2EDuration="9.066000839s" podCreationTimestamp="2025-11-24 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:37.065765573 +0000 UTC m=+1112.952503708" watchObservedRunningTime="2025-11-24 09:07:37.066000839 +0000 UTC m=+1112.952738974" Nov 24 09:07:37 crc kubenswrapper[4886]: I1124 09:07:37.109705 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.109671783 podStartE2EDuration="10.109671783s" podCreationTimestamp="2025-11-24 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:37.105767884 +0000 UTC m=+1112.992506009" watchObservedRunningTime="2025-11-24 09:07:37.109671783 +0000 UTC m=+1112.996409918" Nov 24 09:07:38 crc kubenswrapper[4886]: I1124 09:07:38.059034 4886 generic.go:334] "Generic (PLEG): container finished" podID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerID="afe740630b709ba6ed8b450d6ac5752362f76259ab9a899a8bf8a08abdc93b6d" exitCode=143 Nov 24 09:07:38 crc kubenswrapper[4886]: I1124 09:07:38.059548 4886 generic.go:334] "Generic (PLEG): container finished" podID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerID="7485fe54ab25092f2dee41b85e54ada401bcf7466b136b5006d499ea64ac925a" exitCode=143 Nov 24 09:07:38 crc kubenswrapper[4886]: I1124 09:07:38.059267 4886 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd3c5c6-a206-47a1-9eb6-283268b83e61","Type":"ContainerDied","Data":"afe740630b709ba6ed8b450d6ac5752362f76259ab9a899a8bf8a08abdc93b6d"} Nov 24 09:07:38 crc kubenswrapper[4886]: I1124 09:07:38.059966 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd3c5c6-a206-47a1-9eb6-283268b83e61","Type":"ContainerDied","Data":"7485fe54ab25092f2dee41b85e54ada401bcf7466b136b5006d499ea64ac925a"} Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.072975 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf964533-7535-4425-a880-9b95595188b4","Type":"ContainerStarted","Data":"a9e609372ed8f7c41485e93de92f4c0f886ac1348e1a3d5e436e93097f5f35cf"} Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.073358 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bf964533-7535-4425-a880-9b95595188b4" containerName="glance-log" containerID="cri-o://ca45e518e732ccb8752a7945fedfd2abc7db31393667870fd9da4b132a0bff5b" gracePeriod=30 Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.073997 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bf964533-7535-4425-a880-9b95595188b4" containerName="glance-httpd" containerID="cri-o://a9e609372ed8f7c41485e93de92f4c0f886ac1348e1a3d5e436e93097f5f35cf" gracePeriod=30 Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.117881 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.117852602 podStartE2EDuration="11.117852602s" podCreationTimestamp="2025-11-24 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
09:07:39.109790916 +0000 UTC m=+1114.996529051" watchObservedRunningTime="2025-11-24 09:07:39.117852602 +0000 UTC m=+1115.004590737" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.409916 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c858856cc-mpd52"] Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.471247 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75ffb75746-pwc5g"] Nov 24 09:07:39 crc kubenswrapper[4886]: E1124 09:07:39.472753 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97f59b9-899b-419b-88d9-7aa58d892ffc" containerName="init" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.472789 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97f59b9-899b-419b-88d9-7aa58d892ffc" containerName="init" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.473391 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97f59b9-899b-419b-88d9-7aa58d892ffc" containerName="init" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.481985 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.491115 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.500043 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75ffb75746-pwc5g"] Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.568802 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5654c4b6cf-6mjl4"] Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.580623 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-664f9d77dd-zw4gm"] Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.585879 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.608818 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-664f9d77dd-zw4gm"] Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.653647 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdnrn\" (UniqueName: \"kubernetes.io/projected/19e275c2-5fd6-4ea7-a023-6d7478ae5750-kube-api-access-mdnrn\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.653711 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2rc\" (UniqueName: \"kubernetes.io/projected/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-kube-api-access-xr2rc\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.653791 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-secret-key\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654021 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-combined-ca-bundle\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654110 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-horizon-secret-key\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654401 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-scripts\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654458 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-config-data\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-combined-ca-bundle\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654563 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19e275c2-5fd6-4ea7-a023-6d7478ae5750-scripts\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654597 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e275c2-5fd6-4ea7-a023-6d7478ae5750-logs\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654680 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-tls-certs\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.654855 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-logs\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.655036 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-horizon-tls-certs\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.655084 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e275c2-5fd6-4ea7-a023-6d7478ae5750-config-data\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758022 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-horizon-tls-certs\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758093 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e275c2-5fd6-4ea7-a023-6d7478ae5750-config-data\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758217 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdnrn\" (UniqueName: \"kubernetes.io/projected/19e275c2-5fd6-4ea7-a023-6d7478ae5750-kube-api-access-mdnrn\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758278 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2rc\" (UniqueName: \"kubernetes.io/projected/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-kube-api-access-xr2rc\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-secret-key\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-combined-ca-bundle\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758395 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-horizon-secret-key\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758415 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-scripts\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758441 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-config-data\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758461 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-combined-ca-bundle\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758486 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19e275c2-5fd6-4ea7-a023-6d7478ae5750-scripts\") pod \"horizon-664f9d77dd-zw4gm\" 
(UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758505 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e275c2-5fd6-4ea7-a023-6d7478ae5750-logs\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758530 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-tls-certs\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.758550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-logs\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.760004 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19e275c2-5fd6-4ea7-a023-6d7478ae5750-config-data\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.760309 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-logs\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.760626 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19e275c2-5fd6-4ea7-a023-6d7478ae5750-logs\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.760664 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19e275c2-5fd6-4ea7-a023-6d7478ae5750-scripts\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.761736 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-config-data\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.765943 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-scripts\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.771913 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-tls-certs\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.777908 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-combined-ca-bundle\") 
pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.778378 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-combined-ca-bundle\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.779831 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-secret-key\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.780872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2rc\" (UniqueName: \"kubernetes.io/projected/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-kube-api-access-xr2rc\") pod \"horizon-75ffb75746-pwc5g\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.787388 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdnrn\" (UniqueName: \"kubernetes.io/projected/19e275c2-5fd6-4ea7-a023-6d7478ae5750-kube-api-access-mdnrn\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.789880 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-horizon-secret-key\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " 
pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.799536 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e275c2-5fd6-4ea7-a023-6d7478ae5750-horizon-tls-certs\") pod \"horizon-664f9d77dd-zw4gm\" (UID: \"19e275c2-5fd6-4ea7-a023-6d7478ae5750\") " pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.865207 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:07:39 crc kubenswrapper[4886]: I1124 09:07:39.907407 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:07:40 crc kubenswrapper[4886]: I1124 09:07:40.085273 4886 generic.go:334] "Generic (PLEG): container finished" podID="bf964533-7535-4425-a880-9b95595188b4" containerID="a9e609372ed8f7c41485e93de92f4c0f886ac1348e1a3d5e436e93097f5f35cf" exitCode=0 Nov 24 09:07:40 crc kubenswrapper[4886]: I1124 09:07:40.085318 4886 generic.go:334] "Generic (PLEG): container finished" podID="bf964533-7535-4425-a880-9b95595188b4" containerID="ca45e518e732ccb8752a7945fedfd2abc7db31393667870fd9da4b132a0bff5b" exitCode=143 Nov 24 09:07:40 crc kubenswrapper[4886]: I1124 09:07:40.085397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf964533-7535-4425-a880-9b95595188b4","Type":"ContainerDied","Data":"a9e609372ed8f7c41485e93de92f4c0f886ac1348e1a3d5e436e93097f5f35cf"} Nov 24 09:07:40 crc kubenswrapper[4886]: I1124 09:07:40.085501 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf964533-7535-4425-a880-9b95595188b4","Type":"ContainerDied","Data":"ca45e518e732ccb8752a7945fedfd2abc7db31393667870fd9da4b132a0bff5b"} Nov 24 09:07:40 crc kubenswrapper[4886]: I1124 09:07:40.089169 4886 
generic.go:334] "Generic (PLEG): container finished" podID="5511ae3a-4fa5-400a-88c6-e9fca01787cf" containerID="a753f4d959369346b62434e569d11133ba573b9a5cbd32cd9161eeffadf88a2b" exitCode=0 Nov 24 09:07:40 crc kubenswrapper[4886]: I1124 09:07:40.089228 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wsgjs" event={"ID":"5511ae3a-4fa5-400a-88c6-e9fca01787cf","Type":"ContainerDied","Data":"a753f4d959369346b62434e569d11133ba573b9a5cbd32cd9161eeffadf88a2b"} Nov 24 09:07:44 crc kubenswrapper[4886]: I1124 09:07:44.015548 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:07:44 crc kubenswrapper[4886]: I1124 09:07:44.095505 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ts9fz"] Nov 24 09:07:44 crc kubenswrapper[4886]: I1124 09:07:44.096293 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="dnsmasq-dns" containerID="cri-o://2a57bdaff3d59792302a81061c7531f811a5a9fae61be7ced5e8a140e8d01508" gracePeriod=10 Nov 24 09:07:45 crc kubenswrapper[4886]: I1124 09:07:45.164637 4886 generic.go:334] "Generic (PLEG): container finished" podID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerID="2a57bdaff3d59792302a81061c7531f811a5a9fae61be7ced5e8a140e8d01508" exitCode=0 Nov 24 09:07:45 crc kubenswrapper[4886]: I1124 09:07:45.164708 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" event={"ID":"028c4a78-6e79-412a-954c-abf1cdf4d5a2","Type":"ContainerDied","Data":"2a57bdaff3d59792302a81061c7531f811a5a9fae61be7ced5e8a140e8d01508"} Nov 24 09:07:47 crc kubenswrapper[4886]: E1124 09:07:47.567410 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 09:07:47 crc kubenswrapper[4886]: E1124 09:07:47.568565 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h577h5c9h5bbh98h669hf6h55ch5bfh594h58bh546h97h554hcch54h588hfh575hc5h549h66dhd6h65fh594h6fhbchd4hdbh8bh68h5b5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8vn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-c858856cc-mpd52_openstack(fe0358bf-ca1c-425a-91eb-4a2a6435f618): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:07:47 crc kubenswrapper[4886]: E1124 09:07:47.571048 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-c858856cc-mpd52" podUID="fe0358bf-ca1c-425a-91eb-4a2a6435f618" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.666920 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.747643 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-scripts\") pod \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.747755 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-combined-ca-bundle\") pod \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.747875 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-config-data\") pod \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\" 
(UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.748043 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-logs\") pod \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.748082 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m5fd\" (UniqueName: \"kubernetes.io/projected/6bd3c5c6-a206-47a1-9eb6-283268b83e61-kube-api-access-7m5fd\") pod \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.748124 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-httpd-run\") pod \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.748214 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\" (UID: \"6bd3c5c6-a206-47a1-9eb6-283268b83e61\") " Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.749335 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-logs" (OuterVolumeSpecName: "logs") pod "6bd3c5c6-a206-47a1-9eb6-283268b83e61" (UID: "6bd3c5c6-a206-47a1-9eb6-283268b83e61"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.749931 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6bd3c5c6-a206-47a1-9eb6-283268b83e61" (UID: "6bd3c5c6-a206-47a1-9eb6-283268b83e61"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.759708 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-scripts" (OuterVolumeSpecName: "scripts") pod "6bd3c5c6-a206-47a1-9eb6-283268b83e61" (UID: "6bd3c5c6-a206-47a1-9eb6-283268b83e61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.782548 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "6bd3c5c6-a206-47a1-9eb6-283268b83e61" (UID: "6bd3c5c6-a206-47a1-9eb6-283268b83e61"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.797230 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd3c5c6-a206-47a1-9eb6-283268b83e61-kube-api-access-7m5fd" (OuterVolumeSpecName: "kube-api-access-7m5fd") pod "6bd3c5c6-a206-47a1-9eb6-283268b83e61" (UID: "6bd3c5c6-a206-47a1-9eb6-283268b83e61"). InnerVolumeSpecName "kube-api-access-7m5fd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.821825 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bd3c5c6-a206-47a1-9eb6-283268b83e61" (UID: "6bd3c5c6-a206-47a1-9eb6-283268b83e61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.842740 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-config-data" (OuterVolumeSpecName: "config-data") pod "6bd3c5c6-a206-47a1-9eb6-283268b83e61" (UID: "6bd3c5c6-a206-47a1-9eb6-283268b83e61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.850385 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.850433 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.850451 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd3c5c6-a206-47a1-9eb6-283268b83e61-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.850464 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:47 crc 
kubenswrapper[4886]: I1124 09:07:47.850476 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m5fd\" (UniqueName: \"kubernetes.io/projected/6bd3c5c6-a206-47a1-9eb6-283268b83e61-kube-api-access-7m5fd\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.850486 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6bd3c5c6-a206-47a1-9eb6-283268b83e61-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.850533 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.875187 4886 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 09:07:47 crc kubenswrapper[4886]: I1124 09:07:47.953586 4886 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.200978 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.210861 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6bd3c5c6-a206-47a1-9eb6-283268b83e61","Type":"ContainerDied","Data":"319ed80aeb4d9ae11b0a60b7496685acd9f2d383d7d3463806ae2165d99ba7c5"} Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.210930 4886 scope.go:117] "RemoveContainer" containerID="afe740630b709ba6ed8b450d6ac5752362f76259ab9a899a8bf8a08abdc93b6d" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.297961 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.309708 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.328600 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:07:48 crc kubenswrapper[4886]: E1124 09:07:48.329302 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerName="glance-httpd" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.329323 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerName="glance-httpd" Nov 24 09:07:48 crc kubenswrapper[4886]: E1124 09:07:48.329344 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerName="glance-log" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.329353 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerName="glance-log" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.329656 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerName="glance-log" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.329690 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" containerName="glance-httpd" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.331249 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.336818 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.336987 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.341007 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.477756 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.477807 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.477829 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.477858 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.477899 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.477933 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnmd\" (UniqueName: \"kubernetes.io/projected/b4a3df3e-e493-448e-afb1-b52e1a50437a-kube-api-access-6vnmd\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.477962 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.477994 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.582510 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnmd\" (UniqueName: \"kubernetes.io/projected/b4a3df3e-e493-448e-afb1-b52e1a50437a-kube-api-access-6vnmd\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.582586 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.582630 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.582712 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.582752 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.582772 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.582802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.582844 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.584393 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.590524 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.592654 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.595547 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.612761 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.615915 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.616983 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 
24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.643812 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vnmd\" (UniqueName: \"kubernetes.io/projected/b4a3df3e-e493-448e-afb1-b52e1a50437a-kube-api-access-6vnmd\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.649431 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.662902 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:07:48 crc kubenswrapper[4886]: I1124 09:07:48.866486 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd3c5c6-a206-47a1-9eb6-283268b83e61" path="/var/lib/kubelet/pods/6bd3c5c6-a206-47a1-9eb6-283268b83e61/volumes" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.226989 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wsgjs" event={"ID":"5511ae3a-4fa5-400a-88c6-e9fca01787cf","Type":"ContainerDied","Data":"5831e35c44b5aa40bbf4cb079bd39e59baf3e45cbdf2be91b82d038a3552b1f0"} Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.227384 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5831e35c44b5aa40bbf4cb079bd39e59baf3e45cbdf2be91b82d038a3552b1f0" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.295907 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.432843 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6m7x\" (UniqueName: \"kubernetes.io/projected/5511ae3a-4fa5-400a-88c6-e9fca01787cf-kube-api-access-d6m7x\") pod \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.432926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-fernet-keys\") pod \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.433070 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-scripts\") pod \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.433303 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-combined-ca-bundle\") pod \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.433428 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-credential-keys\") pod \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.433517 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-config-data\") pod \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\" (UID: \"5511ae3a-4fa5-400a-88c6-e9fca01787cf\") " Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.441202 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-scripts" (OuterVolumeSpecName: "scripts") pod "5511ae3a-4fa5-400a-88c6-e9fca01787cf" (UID: "5511ae3a-4fa5-400a-88c6-e9fca01787cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.441389 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5511ae3a-4fa5-400a-88c6-e9fca01787cf-kube-api-access-d6m7x" (OuterVolumeSpecName: "kube-api-access-d6m7x") pod "5511ae3a-4fa5-400a-88c6-e9fca01787cf" (UID: "5511ae3a-4fa5-400a-88c6-e9fca01787cf"). InnerVolumeSpecName "kube-api-access-d6m7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.441419 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5511ae3a-4fa5-400a-88c6-e9fca01787cf" (UID: "5511ae3a-4fa5-400a-88c6-e9fca01787cf"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.441578 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5511ae3a-4fa5-400a-88c6-e9fca01787cf" (UID: "5511ae3a-4fa5-400a-88c6-e9fca01787cf"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.465978 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5511ae3a-4fa5-400a-88c6-e9fca01787cf" (UID: "5511ae3a-4fa5-400a-88c6-e9fca01787cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.474489 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-config-data" (OuterVolumeSpecName: "config-data") pod "5511ae3a-4fa5-400a-88c6-e9fca01787cf" (UID: "5511ae3a-4fa5-400a-88c6-e9fca01787cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.536356 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.536420 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.536440 4886 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.536453 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:50 crc 
kubenswrapper[4886]: I1124 09:07:50.536464 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6m7x\" (UniqueName: \"kubernetes.io/projected/5511ae3a-4fa5-400a-88c6-e9fca01787cf-kube-api-access-d6m7x\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:50 crc kubenswrapper[4886]: I1124 09:07:50.536477 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5511ae3a-4fa5-400a-88c6-e9fca01787cf-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.236946 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wsgjs" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.489518 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wsgjs"] Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.498521 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wsgjs"] Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.594754 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-c6hzr"] Nov 24 09:07:51 crc kubenswrapper[4886]: E1124 09:07:51.595337 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5511ae3a-4fa5-400a-88c6-e9fca01787cf" containerName="keystone-bootstrap" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.595365 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5511ae3a-4fa5-400a-88c6-e9fca01787cf" containerName="keystone-bootstrap" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.595620 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5511ae3a-4fa5-400a-88c6-e9fca01787cf" containerName="keystone-bootstrap" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.596498 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.600916 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tfhvb" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.601211 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.601401 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.601430 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.601449 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.614186 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c6hzr"] Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.663385 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-combined-ca-bundle\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.663497 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-fernet-keys\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.663528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-credential-keys\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.663801 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4t65\" (UniqueName: \"kubernetes.io/projected/ec036c5c-6eff-4c4e-83c2-5727576b540e-kube-api-access-j4t65\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.663948 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-config-data\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.664121 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-scripts\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.766629 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4t65\" (UniqueName: \"kubernetes.io/projected/ec036c5c-6eff-4c4e-83c2-5727576b540e-kube-api-access-j4t65\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.767186 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-config-data\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.767254 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-scripts\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.767371 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-combined-ca-bundle\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.767429 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-fernet-keys\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.767455 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-credential-keys\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.779895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-scripts\") pod \"keystone-bootstrap-c6hzr\" (UID: 
\"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.780335 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-combined-ca-bundle\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.780748 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-fernet-keys\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.783597 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-config-data\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.784026 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-credential-keys\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 09:07:51.786147 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4t65\" (UniqueName: \"kubernetes.io/projected/ec036c5c-6eff-4c4e-83c2-5727576b540e-kube-api-access-j4t65\") pod \"keystone-bootstrap-c6hzr\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:51 crc kubenswrapper[4886]: I1124 
09:07:51.916567 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:07:52 crc kubenswrapper[4886]: I1124 09:07:52.865989 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5511ae3a-4fa5-400a-88c6-e9fca01787cf" path="/var/lib/kubelet/pods/5511ae3a-4fa5-400a-88c6-e9fca01787cf/volumes" Nov 24 09:07:53 crc kubenswrapper[4886]: I1124 09:07:53.983201 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Nov 24 09:07:57 crc kubenswrapper[4886]: I1124 09:07:57.300295 4886 generic.go:334] "Generic (PLEG): container finished" podID="390e7d30-f337-4255-a488-3b5b345235ed" containerID="bad5972703be48ed87467a308a29d4c811db61eabcf9b35e13da11a1be3c6fe1" exitCode=0 Nov 24 09:07:57 crc kubenswrapper[4886]: I1124 09:07:57.300390 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lqtgn" event={"ID":"390e7d30-f337-4255-a488-3b5b345235ed","Type":"ContainerDied","Data":"bad5972703be48ed87467a308a29d4c811db61eabcf9b35e13da11a1be3c6fe1"} Nov 24 09:07:59 crc kubenswrapper[4886]: I1124 09:07:58.991450 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Nov 24 09:07:59 crc kubenswrapper[4886]: I1124 09:07:59.032611 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 09:07:59 crc kubenswrapper[4886]: I1124 09:07:59.032660 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 09:07:59 crc kubenswrapper[4886]: E1124 09:07:59.922679 4886 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 09:07:59 crc kubenswrapper[4886]: E1124 09:07:59.923320 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqwfs,ReadOnly:true,MountPath:/var/run
/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-ph4gr_openstack(7ca0ca62-7545-4e1a-9969-121899a789b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:07:59 crc kubenswrapper[4886]: E1124 09:07:59.924561 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-ph4gr" podUID="7ca0ca62-7545-4e1a-9969-121899a789b0" Nov 24 09:08:00 crc kubenswrapper[4886]: E1124 09:08:00.271588 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 24 09:08:00 crc kubenswrapper[4886]: E1124 09:08:00.272596 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncch686h545h8dhbbh697hcbh7bhbch5ffh5d8h655h586hbfh5bch668h694h95h54fh9ch97h544hdh596h5d7hc7h676h5bh569h9ch558h588q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqlz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6e8de0fe-585f-4cb8-8deb-d788e443fbd2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:00 crc kubenswrapper[4886]: E1124 09:08:00.319894 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 24 09:08:00 crc kubenswrapper[4886]: E1124 09:08:00.320229 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n566h657h699h69h54bh586h554h67dh88h685h64h57bh679h5cdh554h654h69hc8h674h68h558hchc9h646h5cbh655hcbh556h665h56ch67bhb7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6r2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5dc555666f-6hzb6_openstack(9c272c26-c042-4e0d-a35d-2ff5d329f901): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:00 crc kubenswrapper[4886]: E1124 
09:08:00.335864 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5dc555666f-6hzb6" podUID="9c272c26-c042-4e0d-a35d-2ff5d329f901" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.361744 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf964533-7535-4425-a880-9b95595188b4","Type":"ContainerDied","Data":"52226747dd4cce06f34c07109b1a3034304a1e79db1260bf01dbc923dcf2c274"} Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.361976 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52226747dd4cce06f34c07109b1a3034304a1e79db1260bf01dbc923dcf2c274" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.367652 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" event={"ID":"028c4a78-6e79-412a-954c-abf1cdf4d5a2","Type":"ContainerDied","Data":"a1388ecae04dcf8d19014e7b5c5f8344ac347b6502925e8cb6eed15e9b56426b"} Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.367722 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1388ecae04dcf8d19014e7b5c5f8344ac347b6502925e8cb6eed15e9b56426b" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.370335 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c858856cc-mpd52" event={"ID":"fe0358bf-ca1c-425a-91eb-4a2a6435f618","Type":"ContainerDied","Data":"71f8438e6176c72b1d59c10883329cde381465513377a2a9747e24c4b5379575"} Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.370363 4886 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="71f8438e6176c72b1d59c10883329cde381465513377a2a9747e24c4b5379575" Nov 24 09:08:00 crc kubenswrapper[4886]: E1124 09:08:00.371854 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-ph4gr" podUID="7ca0ca62-7545-4e1a-9969-121899a789b0" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.425845 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.432568 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.441852 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.575909 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-config-data\") pod \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.575989 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-dns-svc\") pod \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576028 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-combined-ca-bundle\") pod 
\"bf964533-7535-4425-a880-9b95595188b4\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576084 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssf5d\" (UniqueName: \"kubernetes.io/projected/bf964533-7535-4425-a880-9b95595188b4-kube-api-access-ssf5d\") pod \"bf964533-7535-4425-a880-9b95595188b4\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576126 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-scripts\") pod \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576220 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-config-data\") pod \"bf964533-7535-4425-a880-9b95595188b4\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576253 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-logs\") pod \"bf964533-7535-4425-a880-9b95595188b4\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576311 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-scripts\") pod \"bf964533-7535-4425-a880-9b95595188b4\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576370 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8vn5\" (UniqueName: 
\"kubernetes.io/projected/fe0358bf-ca1c-425a-91eb-4a2a6435f618-kube-api-access-s8vn5\") pod \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576420 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-nb\") pod \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576482 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe0358bf-ca1c-425a-91eb-4a2a6435f618-horizon-secret-key\") pod \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576509 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-config\") pod \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576575 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-894zv\" (UniqueName: \"kubernetes.io/projected/028c4a78-6e79-412a-954c-abf1cdf4d5a2-kube-api-access-894zv\") pod \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576617 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"bf964533-7535-4425-a880-9b95595188b4\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576680 4886 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-httpd-run\") pod \"bf964533-7535-4425-a880-9b95595188b4\" (UID: \"bf964533-7535-4425-a880-9b95595188b4\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576721 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-sb\") pod \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\" (UID: \"028c4a78-6e79-412a-954c-abf1cdf4d5a2\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.576759 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe0358bf-ca1c-425a-91eb-4a2a6435f618-logs\") pod \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\" (UID: \"fe0358bf-ca1c-425a-91eb-4a2a6435f618\") " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.577742 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe0358bf-ca1c-425a-91eb-4a2a6435f618-logs" (OuterVolumeSpecName: "logs") pod "fe0358bf-ca1c-425a-91eb-4a2a6435f618" (UID: "fe0358bf-ca1c-425a-91eb-4a2a6435f618"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.588031 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-config-data" (OuterVolumeSpecName: "config-data") pod "fe0358bf-ca1c-425a-91eb-4a2a6435f618" (UID: "fe0358bf-ca1c-425a-91eb-4a2a6435f618"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.589396 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-scripts" (OuterVolumeSpecName: "scripts") pod "fe0358bf-ca1c-425a-91eb-4a2a6435f618" (UID: "fe0358bf-ca1c-425a-91eb-4a2a6435f618"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.589615 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-logs" (OuterVolumeSpecName: "logs") pod "bf964533-7535-4425-a880-9b95595188b4" (UID: "bf964533-7535-4425-a880-9b95595188b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.589876 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bf964533-7535-4425-a880-9b95595188b4" (UID: "bf964533-7535-4425-a880-9b95595188b4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.610428 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf964533-7535-4425-a880-9b95595188b4-kube-api-access-ssf5d" (OuterVolumeSpecName: "kube-api-access-ssf5d") pod "bf964533-7535-4425-a880-9b95595188b4" (UID: "bf964533-7535-4425-a880-9b95595188b4"). InnerVolumeSpecName "kube-api-access-ssf5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.616744 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0358bf-ca1c-425a-91eb-4a2a6435f618-kube-api-access-s8vn5" (OuterVolumeSpecName: "kube-api-access-s8vn5") pod "fe0358bf-ca1c-425a-91eb-4a2a6435f618" (UID: "fe0358bf-ca1c-425a-91eb-4a2a6435f618"). InnerVolumeSpecName "kube-api-access-s8vn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.620402 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-scripts" (OuterVolumeSpecName: "scripts") pod "bf964533-7535-4425-a880-9b95595188b4" (UID: "bf964533-7535-4425-a880-9b95595188b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.632364 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "bf964533-7535-4425-a880-9b95595188b4" (UID: "bf964533-7535-4425-a880-9b95595188b4"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.635463 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028c4a78-6e79-412a-954c-abf1cdf4d5a2-kube-api-access-894zv" (OuterVolumeSpecName: "kube-api-access-894zv") pod "028c4a78-6e79-412a-954c-abf1cdf4d5a2" (UID: "028c4a78-6e79-412a-954c-abf1cdf4d5a2"). InnerVolumeSpecName "kube-api-access-894zv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.637347 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0358bf-ca1c-425a-91eb-4a2a6435f618-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fe0358bf-ca1c-425a-91eb-4a2a6435f618" (UID: "fe0358bf-ca1c-425a-91eb-4a2a6435f618"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679139 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679209 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679224 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe0358bf-ca1c-425a-91eb-4a2a6435f618-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679235 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679249 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssf5d\" (UniqueName: \"kubernetes.io/projected/bf964533-7535-4425-a880-9b95595188b4-kube-api-access-ssf5d\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679261 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/fe0358bf-ca1c-425a-91eb-4a2a6435f618-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679270 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf964533-7535-4425-a880-9b95595188b4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679280 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679290 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8vn5\" (UniqueName: \"kubernetes.io/projected/fe0358bf-ca1c-425a-91eb-4a2a6435f618-kube-api-access-s8vn5\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679300 4886 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe0358bf-ca1c-425a-91eb-4a2a6435f618-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.679312 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-894zv\" (UniqueName: \"kubernetes.io/projected/028c4a78-6e79-412a-954c-abf1cdf4d5a2-kube-api-access-894zv\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.747327 4886 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.783880 4886 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.788396 4886 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf964533-7535-4425-a880-9b95595188b4" (UID: "bf964533-7535-4425-a880-9b95595188b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.806969 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-config" (OuterVolumeSpecName: "config") pod "028c4a78-6e79-412a-954c-abf1cdf4d5a2" (UID: "028c4a78-6e79-412a-954c-abf1cdf4d5a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.841427 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-config-data" (OuterVolumeSpecName: "config-data") pod "bf964533-7535-4425-a880-9b95595188b4" (UID: "bf964533-7535-4425-a880-9b95595188b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.846398 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "028c4a78-6e79-412a-954c-abf1cdf4d5a2" (UID: "028c4a78-6e79-412a-954c-abf1cdf4d5a2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.860649 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "028c4a78-6e79-412a-954c-abf1cdf4d5a2" (UID: "028c4a78-6e79-412a-954c-abf1cdf4d5a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.871039 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "028c4a78-6e79-412a-954c-abf1cdf4d5a2" (UID: "028c4a78-6e79-412a-954c-abf1cdf4d5a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.895184 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.895270 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.895290 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.895306 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc 
kubenswrapper[4886]: I1124 09:08:00.895346 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf964533-7535-4425-a880-9b95595188b4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:00 crc kubenswrapper[4886]: I1124 09:08:00.895373 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/028c4a78-6e79-412a-954c-abf1cdf4d5a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.383459 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.384183 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.384947 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c858856cc-mpd52" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.455669 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ts9fz"] Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.466938 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ts9fz"] Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.512230 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.525563 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.568378 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:08:01 crc kubenswrapper[4886]: E1124 09:08:01.568875 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="init" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.568891 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="init" Nov 24 09:08:01 crc kubenswrapper[4886]: E1124 09:08:01.568925 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf964533-7535-4425-a880-9b95595188b4" containerName="glance-httpd" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.568932 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf964533-7535-4425-a880-9b95595188b4" containerName="glance-httpd" Nov 24 09:08:01 crc kubenswrapper[4886]: E1124 09:08:01.568955 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf964533-7535-4425-a880-9b95595188b4" containerName="glance-log" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.568963 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bf964533-7535-4425-a880-9b95595188b4" containerName="glance-log" Nov 24 09:08:01 crc kubenswrapper[4886]: E1124 09:08:01.568971 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="dnsmasq-dns" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.568978 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="dnsmasq-dns" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.569143 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="dnsmasq-dns" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.569175 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf964533-7535-4425-a880-9b95595188b4" containerName="glance-log" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.569193 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf964533-7535-4425-a880-9b95595188b4" containerName="glance-httpd" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.572781 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.603376 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.603706 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.603920 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c858856cc-mpd52"] Nov 24 09:08:01 crc kubenswrapper[4886]: E1124 09:08:01.639406 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 24 09:08:01 crc kubenswrapper[4886]: E1124 09:08:01.639605 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs2t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-8hkhc_openstack(a2e82e3b-0acc-454e-b8b5-cf584f3298b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:01 crc kubenswrapper[4886]: E1124 09:08:01.643631 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-8hkhc" 
podUID="a2e82e3b-0acc-454e-b8b5-cf584f3298b4" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.654197 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c858856cc-mpd52"] Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.659732 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.670191 4886 scope.go:117] "RemoveContainer" containerID="7485fe54ab25092f2dee41b85e54ada401bcf7466b136b5006d499ea64ac925a" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.731767 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.732570 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-logs\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.732685 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrhw7\" (UniqueName: \"kubernetes.io/projected/caebd1b1-b583-446f-bfc8-9c4a1be619da-kube-api-access-vrhw7\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.732739 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.732778 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.732817 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.732871 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-scripts\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.732901 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-config-data\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.841249 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrhw7\" (UniqueName: 
\"kubernetes.io/projected/caebd1b1-b583-446f-bfc8-9c4a1be619da-kube-api-access-vrhw7\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.841774 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.841836 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.841882 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.841954 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-scripts\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.841994 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.842065 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.842292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-logs\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.843685 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.851275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-logs\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.852907 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " 
pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.862053 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.864415 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-config-data\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.867951 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.869070 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-scripts\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.874334 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrhw7\" (UniqueName: \"kubernetes.io/projected/caebd1b1-b583-446f-bfc8-9c4a1be619da-kube-api-access-vrhw7\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.884882 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.890456 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.935840 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " pod="openstack/glance-default-external-api-0" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.945932 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-scripts\") pod \"9c272c26-c042-4e0d-a35d-2ff5d329f901\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.946025 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6r2b\" (UniqueName: \"kubernetes.io/projected/9c272c26-c042-4e0d-a35d-2ff5d329f901-kube-api-access-h6r2b\") pod \"9c272c26-c042-4e0d-a35d-2ff5d329f901\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.946068 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-config-data\") pod \"9c272c26-c042-4e0d-a35d-2ff5d329f901\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.946208 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5x4t\" (UniqueName: 
\"kubernetes.io/projected/390e7d30-f337-4255-a488-3b5b345235ed-kube-api-access-v5x4t\") pod \"390e7d30-f337-4255-a488-3b5b345235ed\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.946266 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c272c26-c042-4e0d-a35d-2ff5d329f901-horizon-secret-key\") pod \"9c272c26-c042-4e0d-a35d-2ff5d329f901\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.946291 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-config\") pod \"390e7d30-f337-4255-a488-3b5b345235ed\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.946344 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c272c26-c042-4e0d-a35d-2ff5d329f901-logs\") pod \"9c272c26-c042-4e0d-a35d-2ff5d329f901\" (UID: \"9c272c26-c042-4e0d-a35d-2ff5d329f901\") " Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.946391 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-combined-ca-bundle\") pod \"390e7d30-f337-4255-a488-3b5b345235ed\" (UID: \"390e7d30-f337-4255-a488-3b5b345235ed\") " Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.947087 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-config-data" (OuterVolumeSpecName: "config-data") pod "9c272c26-c042-4e0d-a35d-2ff5d329f901" (UID: "9c272c26-c042-4e0d-a35d-2ff5d329f901"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.947578 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-scripts" (OuterVolumeSpecName: "scripts") pod "9c272c26-c042-4e0d-a35d-2ff5d329f901" (UID: "9c272c26-c042-4e0d-a35d-2ff5d329f901"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.949697 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c272c26-c042-4e0d-a35d-2ff5d329f901-logs" (OuterVolumeSpecName: "logs") pod "9c272c26-c042-4e0d-a35d-2ff5d329f901" (UID: "9c272c26-c042-4e0d-a35d-2ff5d329f901"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.957451 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/390e7d30-f337-4255-a488-3b5b345235ed-kube-api-access-v5x4t" (OuterVolumeSpecName: "kube-api-access-v5x4t") pod "390e7d30-f337-4255-a488-3b5b345235ed" (UID: "390e7d30-f337-4255-a488-3b5b345235ed"). InnerVolumeSpecName "kube-api-access-v5x4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.960090 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c272c26-c042-4e0d-a35d-2ff5d329f901-kube-api-access-h6r2b" (OuterVolumeSpecName: "kube-api-access-h6r2b") pod "9c272c26-c042-4e0d-a35d-2ff5d329f901" (UID: "9c272c26-c042-4e0d-a35d-2ff5d329f901"). InnerVolumeSpecName "kube-api-access-h6r2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:01 crc kubenswrapper[4886]: I1124 09:08:01.959958 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c272c26-c042-4e0d-a35d-2ff5d329f901-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9c272c26-c042-4e0d-a35d-2ff5d329f901" (UID: "9c272c26-c042-4e0d-a35d-2ff5d329f901"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.022492 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "390e7d30-f337-4255-a488-3b5b345235ed" (UID: "390e7d30-f337-4255-a488-3b5b345235ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.034430 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-config" (OuterVolumeSpecName: "config") pod "390e7d30-f337-4255-a488-3b5b345235ed" (UID: "390e7d30-f337-4255-a488-3b5b345235ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.049690 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5x4t\" (UniqueName: \"kubernetes.io/projected/390e7d30-f337-4255-a488-3b5b345235ed-kube-api-access-v5x4t\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.050066 4886 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c272c26-c042-4e0d-a35d-2ff5d329f901-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.050204 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.050327 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c272c26-c042-4e0d-a35d-2ff5d329f901-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.050403 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390e7d30-f337-4255-a488-3b5b345235ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.050495 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.050560 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6r2b\" (UniqueName: \"kubernetes.io/projected/9c272c26-c042-4e0d-a35d-2ff5d329f901-kube-api-access-h6r2b\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.050736 4886 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c272c26-c042-4e0d-a35d-2ff5d329f901-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.141859 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.299103 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75ffb75746-pwc5g"] Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.308015 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-664f9d77dd-zw4gm"] Nov 24 09:08:02 crc kubenswrapper[4886]: W1124 09:08:02.365615 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19e275c2_5fd6_4ea7_a023_6d7478ae5750.slice/crio-6861bfeee7292b301654b4f38fb93a530f6d76c4e1283933b5fef10bdcc02571 WatchSource:0}: Error finding container 6861bfeee7292b301654b4f38fb93a530f6d76c4e1283933b5fef10bdcc02571: Status 404 returned error can't find the container with id 6861bfeee7292b301654b4f38fb93a530f6d76c4e1283933b5fef10bdcc02571 Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.401457 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dc555666f-6hzb6" event={"ID":"9c272c26-c042-4e0d-a35d-2ff5d329f901","Type":"ContainerDied","Data":"9826f97e10a735a735636f97cd28b813e53398ff87d488f2603f81bd0598cf27"} Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.401570 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc555666f-6hzb6" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.425973 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lqtgn" event={"ID":"390e7d30-f337-4255-a488-3b5b345235ed","Type":"ContainerDied","Data":"c480d08fbd2e71950aed9f98c6ebcc8a3561ddbdab1354b5217a768a9b1eba27"} Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.426028 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c480d08fbd2e71950aed9f98c6ebcc8a3561ddbdab1354b5217a768a9b1eba27" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.426231 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lqtgn" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.431534 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5654c4b6cf-6mjl4" event={"ID":"2b936c64-69ae-43db-9d33-9ed58719be26","Type":"ContainerStarted","Data":"bfc902b8166c3fe12c6fbd87a7114a3179672f82f0eaf46d09d5d4af87c8a9f7"} Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.463646 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8bzdg" event={"ID":"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7","Type":"ContainerStarted","Data":"eb061de0a3b606b9324ec783a29c55b4b2f1d44741b5469c970c9ad488a0b3a6"} Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.470551 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75ffb75746-pwc5g" event={"ID":"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc","Type":"ContainerStarted","Data":"4af8b7c8248b28e69e1033743affb1fbf6a8db634ffd4fc78c957c02576ffe85"} Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.505131 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-664f9d77dd-zw4gm" 
event={"ID":"19e275c2-5fd6-4ea7-a023-6d7478ae5750","Type":"ContainerStarted","Data":"6861bfeee7292b301654b4f38fb93a530f6d76c4e1283933b5fef10bdcc02571"} Nov 24 09:08:02 crc kubenswrapper[4886]: E1124 09:08:02.518238 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-8hkhc" podUID="a2e82e3b-0acc-454e-b8b5-cf584f3298b4" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.536804 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c6hzr"] Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.548743 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8bzdg" podStartSLOduration=3.286699477 podStartE2EDuration="34.548721383s" podCreationTimestamp="2025-11-24 09:07:28 +0000 UTC" firstStartedPulling="2025-11-24 09:07:30.215734718 +0000 UTC m=+1106.102472843" lastFinishedPulling="2025-11-24 09:08:01.477756614 +0000 UTC m=+1137.364494749" observedRunningTime="2025-11-24 09:08:02.512624282 +0000 UTC m=+1138.399362427" watchObservedRunningTime="2025-11-24 09:08:02.548721383 +0000 UTC m=+1138.435459518" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.612926 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.685340 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dc555666f-6hzb6"] Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.701338 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5dc555666f-6hzb6"] Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.871142 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" 
path="/var/lib/kubelet/pods/028c4a78-6e79-412a-954c-abf1cdf4d5a2/volumes" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.872005 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c272c26-c042-4e0d-a35d-2ff5d329f901" path="/var/lib/kubelet/pods/9c272c26-c042-4e0d-a35d-2ff5d329f901/volumes" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.872904 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf964533-7535-4425-a880-9b95595188b4" path="/var/lib/kubelet/pods/bf964533-7535-4425-a880-9b95595188b4/volumes" Nov 24 09:08:02 crc kubenswrapper[4886]: I1124 09:08:02.874757 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0358bf-ca1c-425a-91eb-4a2a6435f618" path="/var/lib/kubelet/pods/fe0358bf-ca1c-425a-91eb-4a2a6435f618/volumes" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.075862 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j6rct"] Nov 24 09:08:03 crc kubenswrapper[4886]: E1124 09:08:03.076766 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390e7d30-f337-4255-a488-3b5b345235ed" containerName="neutron-db-sync" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.076793 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="390e7d30-f337-4255-a488-3b5b345235ed" containerName="neutron-db-sync" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.076997 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="390e7d30-f337-4255-a488-3b5b345235ed" containerName="neutron-db-sync" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.078406 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.115938 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j6rct"] Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.213569 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-config\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.213653 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.213712 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.213732 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8vc\" (UniqueName: \"kubernetes.io/projected/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-kube-api-access-4z8vc\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.213752 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.213815 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-svc\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.281059 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8568c485b4-l582l"] Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.283411 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.289572 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.289763 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.289951 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pjfbc" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.290210 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.314625 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8568c485b4-l582l"] Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.316239 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-svc\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.316305 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-config\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.316343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.316381 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.316398 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z8vc\" (UniqueName: \"kubernetes.io/projected/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-kube-api-access-4z8vc\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.316416 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.317710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.318179 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.318368 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.318984 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-svc\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.319695 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-config\") pod 
\"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.350346 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z8vc\" (UniqueName: \"kubernetes.io/projected/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-kube-api-access-4z8vc\") pod \"dnsmasq-dns-6b7b667979-j6rct\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.422127 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mgzr\" (UniqueName: \"kubernetes.io/projected/58e8a97b-bd79-4e90-99ef-4e0eab79c454-kube-api-access-2mgzr\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.422229 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-config\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.422387 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-httpd-config\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.422443 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-ovndb-tls-certs\") pod 
\"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.422569 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-combined-ca-bundle\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.466747 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.492036 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.524638 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6hzr" event={"ID":"ec036c5c-6eff-4c4e-83c2-5727576b540e","Type":"ContainerStarted","Data":"ee3f62d7c41f7cbeb64a14223a657dfe0b6d8af2c34cbb7bf58d4ebe083c7ccf"} Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.524949 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6hzr" event={"ID":"ec036c5c-6eff-4c4e-83c2-5727576b540e","Type":"ContainerStarted","Data":"169e660d7d74619e1ea151b2cef247c9ac6cd033ed26fdfe47b6f7e1d3506ab1"} Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.529471 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-httpd-config\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.529579 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-ovndb-tls-certs\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.531528 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-combined-ca-bundle\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.531720 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mgzr\" (UniqueName: \"kubernetes.io/projected/58e8a97b-bd79-4e90-99ef-4e0eab79c454-kube-api-access-2mgzr\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.531773 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-config\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.534268 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-httpd-config\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.535409 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5654c4b6cf-6mjl4" 
event={"ID":"2b936c64-69ae-43db-9d33-9ed58719be26","Type":"ContainerStarted","Data":"043796bb0b674d4e5dc3a373e0e41904fae080bfdbe53f3350ea850561868dce"} Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.535825 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5654c4b6cf-6mjl4" podUID="2b936c64-69ae-43db-9d33-9ed58719be26" containerName="horizon" containerID="cri-o://043796bb0b674d4e5dc3a373e0e41904fae080bfdbe53f3350ea850561868dce" gracePeriod=30 Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.536092 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5654c4b6cf-6mjl4" podUID="2b936c64-69ae-43db-9d33-9ed58719be26" containerName="horizon-log" containerID="cri-o://bfc902b8166c3fe12c6fbd87a7114a3179672f82f0eaf46d09d5d4af87c8a9f7" gracePeriod=30 Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.540299 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-config\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.545379 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4a3df3e-e493-448e-afb1-b52e1a50437a","Type":"ContainerStarted","Data":"7f784a6ead27bc4f173acaa06b1963cd540a199cada1ca65151204c3c05534dc"} Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.561844 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75ffb75746-pwc5g" event={"ID":"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc","Type":"ContainerStarted","Data":"692850d79528a83bba835c99fe3464de2f71e29a9c94c5526cca407f8eedbbb6"} Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.569079 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-ovndb-tls-certs\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.570728 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mgzr\" (UniqueName: \"kubernetes.io/projected/58e8a97b-bd79-4e90-99ef-4e0eab79c454-kube-api-access-2mgzr\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.577747 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-c6hzr" podStartSLOduration=12.577717136 podStartE2EDuration="12.577717136s" podCreationTimestamp="2025-11-24 09:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:03.549876306 +0000 UTC m=+1139.436614451" watchObservedRunningTime="2025-11-24 09:08:03.577717136 +0000 UTC m=+1139.464455271" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.587112 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-combined-ca-bundle\") pod \"neutron-8568c485b4-l582l\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.591531 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5654c4b6cf-6mjl4" podStartSLOduration=6.51377713 podStartE2EDuration="32.591502252s" podCreationTimestamp="2025-11-24 09:07:31 +0000 UTC" firstStartedPulling="2025-11-24 09:07:35.619455673 +0000 UTC m=+1111.506193808" lastFinishedPulling="2025-11-24 09:08:01.697180795 +0000 UTC 
m=+1137.583918930" observedRunningTime="2025-11-24 09:08:03.582090289 +0000 UTC m=+1139.468828434" watchObservedRunningTime="2025-11-24 09:08:03.591502252 +0000 UTC m=+1139.478240387" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.597018 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-664f9d77dd-zw4gm" event={"ID":"19e275c2-5fd6-4ea7-a023-6d7478ae5750","Type":"ContainerStarted","Data":"562e6289fda66ba4955faecce5f2091b761e67560c22637120c38205928b7308"} Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.597078 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-664f9d77dd-zw4gm" event={"ID":"19e275c2-5fd6-4ea7-a023-6d7478ae5750","Type":"ContainerStarted","Data":"3c50d8c2dec77c2d144b87416408861ca9aba2fb29009be03aa60deaef78a1cd"} Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.647253 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-664f9d77dd-zw4gm" podStartSLOduration=24.647200863 podStartE2EDuration="24.647200863s" podCreationTimestamp="2025-11-24 09:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:03.633672443 +0000 UTC m=+1139.520410588" watchObservedRunningTime="2025-11-24 09:08:03.647200863 +0000 UTC m=+1139.533938998" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.718801 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:03 crc kubenswrapper[4886]: I1124 09:08:03.992918 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-ts9fz" podUID="028c4a78-6e79-412a-954c-abf1cdf4d5a2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Nov 24 09:08:04 crc kubenswrapper[4886]: W1124 09:08:04.417758 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaebd1b1_b583_446f_bfc8_9c4a1be619da.slice/crio-016d9ca63725e50bf9606cd6fb84645d6efc52876d6cc104b31e83ee681818a9 WatchSource:0}: Error finding container 016d9ca63725e50bf9606cd6fb84645d6efc52876d6cc104b31e83ee681818a9: Status 404 returned error can't find the container with id 016d9ca63725e50bf9606cd6fb84645d6efc52876d6cc104b31e83ee681818a9 Nov 24 09:08:04 crc kubenswrapper[4886]: I1124 09:08:04.612093 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4a3df3e-e493-448e-afb1-b52e1a50437a","Type":"ContainerStarted","Data":"93dad6caf21ecc145b49baafdfd8e66227d03285557979991d216a83b024b3c6"} Nov 24 09:08:04 crc kubenswrapper[4886]: I1124 09:08:04.614198 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"caebd1b1-b583-446f-bfc8-9c4a1be619da","Type":"ContainerStarted","Data":"016d9ca63725e50bf9606cd6fb84645d6efc52876d6cc104b31e83ee681818a9"} Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.304379 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j6rct"] Nov 24 09:08:05 crc kubenswrapper[4886]: W1124 09:08:05.365478 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9ffb58a_a86d_40b8_9dc7_3388e0eedb2a.slice/crio-4a0dc994b42d8ef6230d62072cfecd8ea1d2bd0e18badfd7053072217377ffc2 WatchSource:0}: Error finding container 4a0dc994b42d8ef6230d62072cfecd8ea1d2bd0e18badfd7053072217377ffc2: Status 404 returned error can't find the container with id 4a0dc994b42d8ef6230d62072cfecd8ea1d2bd0e18badfd7053072217377ffc2 Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.384143 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8568c485b4-l582l"] Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.654408 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8568c485b4-l582l" event={"ID":"58e8a97b-bd79-4e90-99ef-4e0eab79c454","Type":"ContainerStarted","Data":"43c4f475050ee811619b226dfd84d447e64cc208fec0b54014e2c6f1eadd6569"} Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.664228 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" event={"ID":"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a","Type":"ContainerStarted","Data":"4a0dc994b42d8ef6230d62072cfecd8ea1d2bd0e18badfd7053072217377ffc2"} Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.692112 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75ffb75746-pwc5g" event={"ID":"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc","Type":"ContainerStarted","Data":"06e9411832b7ffda01b80eacb06c2049544b01676f1e649af4e6b2361c635c65"} Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.701100 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e8de0fe-585f-4cb8-8deb-d788e443fbd2","Type":"ContainerStarted","Data":"a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b"} Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.706792 4886 generic.go:334] "Generic (PLEG): container finished" podID="d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" 
containerID="eb061de0a3b606b9324ec783a29c55b4b2f1d44741b5469c970c9ad488a0b3a6" exitCode=0 Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.706859 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8bzdg" event={"ID":"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7","Type":"ContainerDied","Data":"eb061de0a3b606b9324ec783a29c55b4b2f1d44741b5469c970c9ad488a0b3a6"} Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.734828 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75ffb75746-pwc5g" podStartSLOduration=26.734801918 podStartE2EDuration="26.734801918s" podCreationTimestamp="2025-11-24 09:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:05.725608921 +0000 UTC m=+1141.612347056" watchObservedRunningTime="2025-11-24 09:08:05.734801918 +0000 UTC m=+1141.621540053" Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.839935 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58775dd67f-bvv4s"] Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.841558 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.850796 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.851011 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 09:08:05 crc kubenswrapper[4886]: I1124 09:08:05.880671 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58775dd67f-bvv4s"] Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.020661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-config\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.020722 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-internal-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.020745 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqc7x\" (UniqueName: \"kubernetes.io/projected/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-kube-api-access-wqc7x\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.020814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-public-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.020931 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-ovndb-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.020962 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-combined-ca-bundle\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.020984 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-httpd-config\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.123368 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-public-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.124013 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-ovndb-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.124063 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-combined-ca-bundle\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.124092 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-httpd-config\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.124192 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-config\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.124226 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-internal-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.124264 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqc7x\" (UniqueName: \"kubernetes.io/projected/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-kube-api-access-wqc7x\") pod 
\"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.130283 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-public-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.131859 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-config\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.136355 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-ovndb-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.140909 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-internal-tls-certs\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.144791 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-httpd-config\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc 
kubenswrapper[4886]: I1124 09:08:06.145616 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqc7x\" (UniqueName: \"kubernetes.io/projected/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-kube-api-access-wqc7x\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.163393 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b-combined-ca-bundle\") pod \"neutron-58775dd67f-bvv4s\" (UID: \"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b\") " pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.185066 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.739596 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"caebd1b1-b583-446f-bfc8-9c4a1be619da","Type":"ContainerStarted","Data":"8e39182930d884585426ec654fe44ba08ebe43f29f1ea47f6779bc6dbdc6d168"} Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.745498 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8568c485b4-l582l" event={"ID":"58e8a97b-bd79-4e90-99ef-4e0eab79c454","Type":"ContainerStarted","Data":"8d13265eb27febd68b62cd429cdb79754e348aaaf855824ed40c4f409263550d"} Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.759814 4886 generic.go:334] "Generic (PLEG): container finished" podID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerID="212424341bf6e05770364590b05890293fe77b0701e9d44f15856d2d159517f7" exitCode=0 Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.759932 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" 
event={"ID":"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a","Type":"ContainerDied","Data":"212424341bf6e05770364590b05890293fe77b0701e9d44f15856d2d159517f7"} Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.776806 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4a3df3e-e493-448e-afb1-b52e1a50437a","Type":"ContainerStarted","Data":"0a2394bb23bdc576103a3a750bc16ce96e45f561bdb043e7aa24c8b8b48fbe56"} Nov 24 09:08:06 crc kubenswrapper[4886]: I1124 09:08:06.865652 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=18.865632146 podStartE2EDuration="18.865632146s" podCreationTimestamp="2025-11-24 09:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:06.864858894 +0000 UTC m=+1142.751597039" watchObservedRunningTime="2025-11-24 09:08:06.865632146 +0000 UTC m=+1142.752370271" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.308926 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58775dd67f-bvv4s"] Nov 24 09:08:07 crc kubenswrapper[4886]: W1124 09:08:07.353201 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e8a3f6_7c28_4b33_a355_1c7f38b2cc7b.slice/crio-82359ab31a5fc7d9d90ce9b9180a9e22edd79b9265e87df4da2fbad711456537 WatchSource:0}: Error finding container 82359ab31a5fc7d9d90ce9b9180a9e22edd79b9265e87df4da2fbad711456537: Status 404 returned error can't find the container with id 82359ab31a5fc7d9d90ce9b9180a9e22edd79b9265e87df4da2fbad711456537 Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.713928 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8bzdg" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.823635 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-logs\") pod \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.823761 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-config-data\") pod \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.823790 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-scripts\") pod \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.824113 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-combined-ca-bundle\") pod \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.824139 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjjwb\" (UniqueName: \"kubernetes.io/projected/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-kube-api-access-vjjwb\") pod \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\" (UID: \"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7\") " Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.826262 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-logs" (OuterVolumeSpecName: "logs") pod "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" (UID: "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.840473 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"caebd1b1-b583-446f-bfc8-9c4a1be619da","Type":"ContainerStarted","Data":"c955b1a526ed486b0ca332997a1e4a75451163825649ca98e777b93f4c4dd6aa"} Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.841068 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-kube-api-access-vjjwb" (OuterVolumeSpecName: "kube-api-access-vjjwb") pod "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" (UID: "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7"). InnerVolumeSpecName "kube-api-access-vjjwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.842562 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-scripts" (OuterVolumeSpecName: "scripts") pod "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" (UID: "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.864584 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8bzdg" event={"ID":"d9603d94-2b25-4cdc-bab2-daeae2b9f8a7","Type":"ContainerDied","Data":"1167b45578b03296ec9f947936712436221f2689e4276ac7db0461dc9580911f"} Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.864636 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1167b45578b03296ec9f947936712436221f2689e4276ac7db0461dc9580911f" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.864708 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8bzdg" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.886525 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-646878466-vzd4z"] Nov 24 09:08:07 crc kubenswrapper[4886]: E1124 09:08:07.887058 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" containerName="placement-db-sync" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.887077 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" containerName="placement-db-sync" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.887306 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" containerName="placement-db-sync" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.888479 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.896634 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8568c485b4-l582l" event={"ID":"58e8a97b-bd79-4e90-99ef-4e0eab79c454","Type":"ContainerStarted","Data":"889ae79220a89d15eec8f56867b0331a8b9a2285a600e5e3ca9e38df50c45943"} Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.903542 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58775dd67f-bvv4s" event={"ID":"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b","Type":"ContainerStarted","Data":"82359ab31a5fc7d9d90ce9b9180a9e22edd79b9265e87df4da2fbad711456537"} Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.907197 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.913722 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.913722 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.927055 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjjwb\" (UniqueName: \"kubernetes.io/projected/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-kube-api-access-vjjwb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.927098 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.927110 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 
09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.929805 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.929775793 podStartE2EDuration="6.929775793s" podCreationTimestamp="2025-11-24 09:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:07.878269409 +0000 UTC m=+1143.765007554" watchObservedRunningTime="2025-11-24 09:08:07.929775793 +0000 UTC m=+1143.816513928" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.943048 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-646878466-vzd4z"] Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.996641 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8568c485b4-l582l" podStartSLOduration=4.996616726 podStartE2EDuration="4.996616726s" podCreationTimestamp="2025-11-24 09:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:07.98997507 +0000 UTC m=+1143.876713215" watchObservedRunningTime="2025-11-24 09:08:07.996616726 +0000 UTC m=+1143.883354861" Nov 24 09:08:07 crc kubenswrapper[4886]: I1124 09:08:07.997366 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" (UID: "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.027819 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-config-data" (OuterVolumeSpecName: "config-data") pod "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" (UID: "d9603d94-2b25-4cdc-bab2-daeae2b9f8a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.028987 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-public-tls-certs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.029108 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-combined-ca-bundle\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.029140 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7wx\" (UniqueName: \"kubernetes.io/projected/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-kube-api-access-7z7wx\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.029356 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-config-data\") pod 
\"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.029435 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-internal-tls-certs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.029484 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-logs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.029536 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-scripts\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.029635 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.029656 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.132663 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-config-data\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.133139 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-internal-tls-certs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.133198 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-logs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.133247 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-scripts\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.133325 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-public-tls-certs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.133373 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-combined-ca-bundle\") pod 
\"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.133393 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7wx\" (UniqueName: \"kubernetes.io/projected/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-kube-api-access-7z7wx\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.135687 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-logs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.137956 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-scripts\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.138335 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-internal-tls-certs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.139635 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-combined-ca-bundle\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 
09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.143323 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-config-data\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.145060 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-public-tls-certs\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.160485 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7wx\" (UniqueName: \"kubernetes.io/projected/98af9edc-5cf6-4dd9-93e0-2e320d0d0939-kube-api-access-7z7wx\") pod \"placement-646878466-vzd4z\" (UID: \"98af9edc-5cf6-4dd9-93e0-2e320d0d0939\") " pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.283427 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.664872 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.665165 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.746220 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.780829 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.884526 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-646878466-vzd4z"] Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.937613 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58775dd67f-bvv4s" event={"ID":"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b","Type":"ContainerStarted","Data":"e31950c9485a0867b128e4a163f8a84532fd27f4557f4683d5a8e411ddb9a1be"} Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.937668 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58775dd67f-bvv4s" event={"ID":"f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b","Type":"ContainerStarted","Data":"91676e99205914f2e275ce900da88227740745e1b2e0a447858d63631809ee57"} Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.939143 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.942123 4886 generic.go:334] "Generic (PLEG): container finished" podID="ec036c5c-6eff-4c4e-83c2-5727576b540e" 
containerID="ee3f62d7c41f7cbeb64a14223a657dfe0b6d8af2c34cbb7bf58d4ebe083c7ccf" exitCode=0 Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.942355 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6hzr" event={"ID":"ec036c5c-6eff-4c4e-83c2-5727576b540e","Type":"ContainerDied","Data":"ee3f62d7c41f7cbeb64a14223a657dfe0b6d8af2c34cbb7bf58d4ebe083c7ccf"} Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.947405 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" event={"ID":"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a","Type":"ContainerStarted","Data":"0333e0897d0b7c2f487364113d5b89b47d1882e24e7f01a38ec764a7b93dda93"} Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.947846 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.947903 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.947925 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:08 crc kubenswrapper[4886]: I1124 09:08:08.982390 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58775dd67f-bvv4s" podStartSLOduration=3.982366527 podStartE2EDuration="3.982366527s" podCreationTimestamp="2025-11-24 09:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:08.967371566 +0000 UTC m=+1144.854109701" watchObservedRunningTime="2025-11-24 09:08:08.982366527 +0000 UTC m=+1144.869104662" Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.001287 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" 
podStartSLOduration=6.001258246 podStartE2EDuration="6.001258246s" podCreationTimestamp="2025-11-24 09:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:08.992521521 +0000 UTC m=+1144.879259656" watchObservedRunningTime="2025-11-24 09:08:09.001258246 +0000 UTC m=+1144.887996381" Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.866142 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.866675 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.908240 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.909811 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.988544 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-646878466-vzd4z" event={"ID":"98af9edc-5cf6-4dd9-93e0-2e320d0d0939","Type":"ContainerStarted","Data":"70bda6239654c380a097993d196ca376630467c43e9cde1abadd6616b9895e48"} Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.988592 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-646878466-vzd4z" event={"ID":"98af9edc-5cf6-4dd9-93e0-2e320d0d0939","Type":"ContainerStarted","Data":"0c02fd647c884a54ef50ee022ec04fedd6d258e5b7994bd1eddd2fd33d823c77"} Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.988604 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-646878466-vzd4z" 
event={"ID":"98af9edc-5cf6-4dd9-93e0-2e320d0d0939","Type":"ContainerStarted","Data":"0571baf0b2125256bba6b40aa0938f95047e2dde2c74074176e5ca7965780d75"} Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.990515 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:09 crc kubenswrapper[4886]: I1124 09:08:09.990557 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:10 crc kubenswrapper[4886]: I1124 09:08:10.054331 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-646878466-vzd4z" podStartSLOduration=3.054309573 podStartE2EDuration="3.054309573s" podCreationTimestamp="2025-11-24 09:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:10.050133406 +0000 UTC m=+1145.936871551" watchObservedRunningTime="2025-11-24 09:08:10.054309573 +0000 UTC m=+1145.941047708" Nov 24 09:08:11 crc kubenswrapper[4886]: I1124 09:08:11.006111 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 09:08:12 crc kubenswrapper[4886]: I1124 09:08:12.032829 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 09:08:12 crc kubenswrapper[4886]: I1124 09:08:12.033502 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 09:08:12 crc kubenswrapper[4886]: I1124 09:08:12.034415 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:08:12 crc kubenswrapper[4886]: I1124 09:08:12.038174 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 09:08:12 crc kubenswrapper[4886]: I1124 09:08:12.145464 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 09:08:12 crc kubenswrapper[4886]: I1124 09:08:12.145526 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 09:08:12 crc kubenswrapper[4886]: I1124 09:08:12.207311 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 09:08:12 crc kubenswrapper[4886]: I1124 09:08:12.222460 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 09:08:13 crc kubenswrapper[4886]: I1124 09:08:13.028339 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 09:08:13 crc kubenswrapper[4886]: I1124 09:08:13.028403 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 09:08:13 crc kubenswrapper[4886]: I1124 09:08:13.470469 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:13 crc kubenswrapper[4886]: I1124 09:08:13.549264 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-8wwcw"] Nov 24 09:08:13 crc kubenswrapper[4886]: I1124 09:08:13.549705 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" podUID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerName="dnsmasq-dns" containerID="cri-o://eab15089d73b98cd36e88c196689d7cf11cf9012d2d1eaf43de60cdd938e3822" gracePeriod=10 Nov 24 09:08:14 crc kubenswrapper[4886]: I1124 09:08:14.014574 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" podUID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: 
connection refused" Nov 24 09:08:14 crc kubenswrapper[4886]: I1124 09:08:14.044913 4886 generic.go:334] "Generic (PLEG): container finished" podID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerID="eab15089d73b98cd36e88c196689d7cf11cf9012d2d1eaf43de60cdd938e3822" exitCode=0 Nov 24 09:08:14 crc kubenswrapper[4886]: I1124 09:08:14.044968 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" event={"ID":"d02e92b5-fa15-43ce-a8aa-c3dc06490056","Type":"ContainerDied","Data":"eab15089d73b98cd36e88c196689d7cf11cf9012d2d1eaf43de60cdd938e3822"} Nov 24 09:08:15 crc kubenswrapper[4886]: I1124 09:08:15.572405 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 09:08:15 crc kubenswrapper[4886]: I1124 09:08:15.572842 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 09:08:15 crc kubenswrapper[4886]: I1124 09:08:15.575041 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.073983 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6hzr" event={"ID":"ec036c5c-6eff-4c4e-83c2-5727576b540e","Type":"ContainerDied","Data":"169e660d7d74619e1ea151b2cef247c9ac6cd033ed26fdfe47b6f7e1d3506ab1"} Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.074845 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="169e660d7d74619e1ea151b2cef247c9ac6cd033ed26fdfe47b6f7e1d3506ab1" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.240546 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.360565 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-fernet-keys\") pod \"ec036c5c-6eff-4c4e-83c2-5727576b540e\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.360770 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4t65\" (UniqueName: \"kubernetes.io/projected/ec036c5c-6eff-4c4e-83c2-5727576b540e-kube-api-access-j4t65\") pod \"ec036c5c-6eff-4c4e-83c2-5727576b540e\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.360802 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-credential-keys\") pod \"ec036c5c-6eff-4c4e-83c2-5727576b540e\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.360826 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-config-data\") pod \"ec036c5c-6eff-4c4e-83c2-5727576b540e\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.360911 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-scripts\") pod \"ec036c5c-6eff-4c4e-83c2-5727576b540e\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.361040 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-combined-ca-bundle\") pod \"ec036c5c-6eff-4c4e-83c2-5727576b540e\" (UID: \"ec036c5c-6eff-4c4e-83c2-5727576b540e\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.370769 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ec036c5c-6eff-4c4e-83c2-5727576b540e" (UID: "ec036c5c-6eff-4c4e-83c2-5727576b540e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.381679 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-scripts" (OuterVolumeSpecName: "scripts") pod "ec036c5c-6eff-4c4e-83c2-5727576b540e" (UID: "ec036c5c-6eff-4c4e-83c2-5727576b540e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.383562 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ec036c5c-6eff-4c4e-83c2-5727576b540e" (UID: "ec036c5c-6eff-4c4e-83c2-5727576b540e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.388874 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec036c5c-6eff-4c4e-83c2-5727576b540e-kube-api-access-j4t65" (OuterVolumeSpecName: "kube-api-access-j4t65") pod "ec036c5c-6eff-4c4e-83c2-5727576b540e" (UID: "ec036c5c-6eff-4c4e-83c2-5727576b540e"). InnerVolumeSpecName "kube-api-access-j4t65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.464221 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.464270 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4t65\" (UniqueName: \"kubernetes.io/projected/ec036c5c-6eff-4c4e-83c2-5727576b540e-kube-api-access-j4t65\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.464286 4886 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.464297 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.468340 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec036c5c-6eff-4c4e-83c2-5727576b540e" (UID: "ec036c5c-6eff-4c4e-83c2-5727576b540e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.491211 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-config-data" (OuterVolumeSpecName: "config-data") pod "ec036c5c-6eff-4c4e-83c2-5727576b540e" (UID: "ec036c5c-6eff-4c4e-83c2-5727576b540e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.566751 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.566807 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec036c5c-6eff-4c4e-83c2-5727576b540e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.583432 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.670817 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx9wt\" (UniqueName: \"kubernetes.io/projected/d02e92b5-fa15-43ce-a8aa-c3dc06490056-kube-api-access-lx9wt\") pod \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.670919 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-swift-storage-0\") pod \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.670970 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-nb\") pod \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.671046 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-config\") pod \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.671103 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-sb\") pod \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.671262 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-svc\") pod \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\" (UID: \"d02e92b5-fa15-43ce-a8aa-c3dc06490056\") " Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.693219 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02e92b5-fa15-43ce-a8aa-c3dc06490056-kube-api-access-lx9wt" (OuterVolumeSpecName: "kube-api-access-lx9wt") pod "d02e92b5-fa15-43ce-a8aa-c3dc06490056" (UID: "d02e92b5-fa15-43ce-a8aa-c3dc06490056"). InnerVolumeSpecName "kube-api-access-lx9wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.757588 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d02e92b5-fa15-43ce-a8aa-c3dc06490056" (UID: "d02e92b5-fa15-43ce-a8aa-c3dc06490056"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.774651 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.774694 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx9wt\" (UniqueName: \"kubernetes.io/projected/d02e92b5-fa15-43ce-a8aa-c3dc06490056-kube-api-access-lx9wt\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.775838 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d02e92b5-fa15-43ce-a8aa-c3dc06490056" (UID: "d02e92b5-fa15-43ce-a8aa-c3dc06490056"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.804931 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d02e92b5-fa15-43ce-a8aa-c3dc06490056" (UID: "d02e92b5-fa15-43ce-a8aa-c3dc06490056"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.814849 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d02e92b5-fa15-43ce-a8aa-c3dc06490056" (UID: "d02e92b5-fa15-43ce-a8aa-c3dc06490056"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.900692 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.900750 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.900764 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:16 crc kubenswrapper[4886]: I1124 09:08:16.986700 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-config" (OuterVolumeSpecName: "config") pod "d02e92b5-fa15-43ce-a8aa-c3dc06490056" (UID: "d02e92b5-fa15-43ce-a8aa-c3dc06490056"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.012285 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d02e92b5-fa15-43ce-a8aa-c3dc06490056-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.161599 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c6hzr" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.162576 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.165484 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-8wwcw" event={"ID":"d02e92b5-fa15-43ce-a8aa-c3dc06490056","Type":"ContainerDied","Data":"4fd89efe314c245138763f1ece726d81324a2a53de93c942959af9ab1ac20570"} Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.165560 4886 scope.go:117] "RemoveContainer" containerID="eab15089d73b98cd36e88c196689d7cf11cf9012d2d1eaf43de60cdd938e3822" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.214324 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-8wwcw"] Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.224750 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-8wwcw"] Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.271550 4886 scope.go:117] "RemoveContainer" containerID="07682178d95fbd8e2bc70ffbdb25293b719ac0df1e69af24562f01b4a6ba79a8" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.456204 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-78ff5b5cf5-swx4n"] Nov 24 09:08:17 crc kubenswrapper[4886]: E1124 09:08:17.456768 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerName="init" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.456787 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerName="init" Nov 24 09:08:17 crc kubenswrapper[4886]: E1124 09:08:17.456818 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec036c5c-6eff-4c4e-83c2-5727576b540e" containerName="keystone-bootstrap" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.456826 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec036c5c-6eff-4c4e-83c2-5727576b540e" 
containerName="keystone-bootstrap" Nov 24 09:08:17 crc kubenswrapper[4886]: E1124 09:08:17.456850 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerName="dnsmasq-dns" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.456855 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerName="dnsmasq-dns" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.457106 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" containerName="dnsmasq-dns" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.457142 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec036c5c-6eff-4c4e-83c2-5727576b540e" containerName="keystone-bootstrap" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.457924 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.461244 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.461557 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.461722 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tfhvb" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.461861 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.461980 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.467470 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-78ff5b5cf5-swx4n"] Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.468257 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.632334 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-scripts\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.633124 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl99g\" (UniqueName: \"kubernetes.io/projected/ba9d3f7a-c442-4fac-bc1f-4863e157b084-kube-api-access-xl99g\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.633250 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-config-data\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.633332 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-public-tls-certs\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.633355 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-combined-ca-bundle\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.633498 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-fernet-keys\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.633584 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-internal-tls-certs\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.633772 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-credential-keys\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.735287 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-credential-keys\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.735347 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-scripts\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.735440 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl99g\" (UniqueName: \"kubernetes.io/projected/ba9d3f7a-c442-4fac-bc1f-4863e157b084-kube-api-access-xl99g\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.735466 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-config-data\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.736739 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-combined-ca-bundle\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.736769 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-public-tls-certs\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.736793 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-fernet-keys\") pod 
\"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.736811 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-internal-tls-certs\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.743231 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-config-data\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.743930 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-credential-keys\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.744348 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-internal-tls-certs\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.746210 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-scripts\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 
09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.746544 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-combined-ca-bundle\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.747703 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-public-tls-certs\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.753500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba9d3f7a-c442-4fac-bc1f-4863e157b084-fernet-keys\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.767108 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl99g\" (UniqueName: \"kubernetes.io/projected/ba9d3f7a-c442-4fac-bc1f-4863e157b084-kube-api-access-xl99g\") pod \"keystone-78ff5b5cf5-swx4n\" (UID: \"ba9d3f7a-c442-4fac-bc1f-4863e157b084\") " pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:17 crc kubenswrapper[4886]: I1124 09:08:17.785583 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:18 crc kubenswrapper[4886]: I1124 09:08:18.210759 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ph4gr" event={"ID":"7ca0ca62-7545-4e1a-9969-121899a789b0","Type":"ContainerStarted","Data":"1a2f781f50b1db4cb1b9621aa4fdeee17b675d073a091c05c759de27d2a1091e"} Nov 24 09:08:18 crc kubenswrapper[4886]: I1124 09:08:18.226876 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e8de0fe-585f-4cb8-8deb-d788e443fbd2","Type":"ContainerStarted","Data":"69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d"} Nov 24 09:08:18 crc kubenswrapper[4886]: I1124 09:08:18.238356 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8hkhc" event={"ID":"a2e82e3b-0acc-454e-b8b5-cf584f3298b4","Type":"ContainerStarted","Data":"b3b4e689be17251d4e47ec14d33393654fb1815831a108f236ce236410b2daf9"} Nov 24 09:08:18 crc kubenswrapper[4886]: I1124 09:08:18.238613 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-ph4gr" podStartSLOduration=4.347265284 podStartE2EDuration="51.238583666s" podCreationTimestamp="2025-11-24 09:07:27 +0000 UTC" firstStartedPulling="2025-11-24 09:07:29.454383077 +0000 UTC m=+1105.341121212" lastFinishedPulling="2025-11-24 09:08:16.345701449 +0000 UTC m=+1152.232439594" observedRunningTime="2025-11-24 09:08:18.234650946 +0000 UTC m=+1154.121389081" watchObservedRunningTime="2025-11-24 09:08:18.238583666 +0000 UTC m=+1154.125321801" Nov 24 09:08:18 crc kubenswrapper[4886]: I1124 09:08:18.279382 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-8hkhc" podStartSLOduration=3.591965414 podStartE2EDuration="50.279351349s" podCreationTimestamp="2025-11-24 09:07:28 +0000 UTC" firstStartedPulling="2025-11-24 09:07:29.974936379 +0000 UTC m=+1105.861674534" 
lastFinishedPulling="2025-11-24 09:08:16.662322334 +0000 UTC m=+1152.549060469" observedRunningTime="2025-11-24 09:08:18.272748614 +0000 UTC m=+1154.159486749" watchObservedRunningTime="2025-11-24 09:08:18.279351349 +0000 UTC m=+1154.166089484" Nov 24 09:08:18 crc kubenswrapper[4886]: I1124 09:08:18.463324 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-78ff5b5cf5-swx4n"] Nov 24 09:08:18 crc kubenswrapper[4886]: W1124 09:08:18.486795 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba9d3f7a_c442_4fac_bc1f_4863e157b084.slice/crio-945354565bb673c1a27011466f7a15f8da3a0f7989f9587c7eab2fa5595aadd4 WatchSource:0}: Error finding container 945354565bb673c1a27011466f7a15f8da3a0f7989f9587c7eab2fa5595aadd4: Status 404 returned error can't find the container with id 945354565bb673c1a27011466f7a15f8da3a0f7989f9587c7eab2fa5595aadd4 Nov 24 09:08:18 crc kubenswrapper[4886]: I1124 09:08:18.865835 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02e92b5-fa15-43ce-a8aa-c3dc06490056" path="/var/lib/kubelet/pods/d02e92b5-fa15-43ce-a8aa-c3dc06490056/volumes" Nov 24 09:08:19 crc kubenswrapper[4886]: I1124 09:08:19.261214 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-78ff5b5cf5-swx4n" event={"ID":"ba9d3f7a-c442-4fac-bc1f-4863e157b084","Type":"ContainerStarted","Data":"a32b8275bfc5c5d49f0ae6408c1dec9ecc44111662b8185fdec1483fc994c58a"} Nov 24 09:08:19 crc kubenswrapper[4886]: I1124 09:08:19.261711 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-78ff5b5cf5-swx4n" event={"ID":"ba9d3f7a-c442-4fac-bc1f-4863e157b084","Type":"ContainerStarted","Data":"945354565bb673c1a27011466f7a15f8da3a0f7989f9587c7eab2fa5595aadd4"} Nov 24 09:08:19 crc kubenswrapper[4886]: I1124 09:08:19.261732 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 
24 09:08:19 crc kubenswrapper[4886]: I1124 09:08:19.868322 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75ffb75746-pwc5g" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 24 09:08:19 crc kubenswrapper[4886]: I1124 09:08:19.910674 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-664f9d77dd-zw4gm" podUID="19e275c2-5fd6-4ea7-a023-6d7478ae5750" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 24 09:08:24 crc kubenswrapper[4886]: I1124 09:08:24.320709 4886 generic.go:334] "Generic (PLEG): container finished" podID="a2e82e3b-0acc-454e-b8b5-cf584f3298b4" containerID="b3b4e689be17251d4e47ec14d33393654fb1815831a108f236ce236410b2daf9" exitCode=0 Nov 24 09:08:24 crc kubenswrapper[4886]: I1124 09:08:24.320804 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8hkhc" event={"ID":"a2e82e3b-0acc-454e-b8b5-cf584f3298b4","Type":"ContainerDied","Data":"b3b4e689be17251d4e47ec14d33393654fb1815831a108f236ce236410b2daf9"} Nov 24 09:08:24 crc kubenswrapper[4886]: I1124 09:08:24.344378 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-78ff5b5cf5-swx4n" podStartSLOduration=7.34435851 podStartE2EDuration="7.34435851s" podCreationTimestamp="2025-11-24 09:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:19.295899553 +0000 UTC m=+1155.182637688" watchObservedRunningTime="2025-11-24 09:08:24.34435851 +0000 UTC m=+1160.231096645" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.077916 4886 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.160588 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-db-sync-config-data\") pod \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.161335 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-combined-ca-bundle\") pod \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.161480 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs2t9\" (UniqueName: \"kubernetes.io/projected/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-kube-api-access-qs2t9\") pod \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\" (UID: \"a2e82e3b-0acc-454e-b8b5-cf584f3298b4\") " Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.166524 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a2e82e3b-0acc-454e-b8b5-cf584f3298b4" (UID: "a2e82e3b-0acc-454e-b8b5-cf584f3298b4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.166612 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-kube-api-access-qs2t9" (OuterVolumeSpecName: "kube-api-access-qs2t9") pod "a2e82e3b-0acc-454e-b8b5-cf584f3298b4" (UID: "a2e82e3b-0acc-454e-b8b5-cf584f3298b4"). 
InnerVolumeSpecName "kube-api-access-qs2t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.190315 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2e82e3b-0acc-454e-b8b5-cf584f3298b4" (UID: "a2e82e3b-0acc-454e-b8b5-cf584f3298b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.264334 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.264381 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs2t9\" (UniqueName: \"kubernetes.io/projected/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-kube-api-access-qs2t9\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.264407 4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e82e3b-0acc-454e-b8b5-cf584f3298b4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:27 crc kubenswrapper[4886]: E1124 09:08:27.340248 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.365374 4886 generic.go:334] "Generic (PLEG): container finished" podID="7ca0ca62-7545-4e1a-9969-121899a789b0" containerID="1a2f781f50b1db4cb1b9621aa4fdeee17b675d073a091c05c759de27d2a1091e" exitCode=0 Nov 24 
09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.365471 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ph4gr" event={"ID":"7ca0ca62-7545-4e1a-9969-121899a789b0","Type":"ContainerDied","Data":"1a2f781f50b1db4cb1b9621aa4fdeee17b675d073a091c05c759de27d2a1091e"} Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.368930 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e8de0fe-585f-4cb8-8deb-d788e443fbd2","Type":"ContainerStarted","Data":"d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174"} Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.369040 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="ceilometer-notification-agent" containerID="cri-o://a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b" gracePeriod=30 Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.369103 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="proxy-httpd" containerID="cri-o://d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174" gracePeriod=30 Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.369089 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.369110 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="sg-core" containerID="cri-o://69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d" gracePeriod=30 Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.374824 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8hkhc" 
event={"ID":"a2e82e3b-0acc-454e-b8b5-cf584f3298b4","Type":"ContainerDied","Data":"799cd6f62389d70b2a893ebe5bcef353dfad1932e8e7517fb2b1188548f9faf7"} Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.374870 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="799cd6f62389d70b2a893ebe5bcef353dfad1932e8e7517fb2b1188548f9faf7" Nov 24 09:08:27 crc kubenswrapper[4886]: I1124 09:08:27.374869 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8hkhc" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.366773 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-df69f5cf-v8lvl"] Nov 24 09:08:28 crc kubenswrapper[4886]: E1124 09:08:28.367711 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e82e3b-0acc-454e-b8b5-cf584f3298b4" containerName="barbican-db-sync" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.367724 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e82e3b-0acc-454e-b8b5-cf584f3298b4" containerName="barbican-db-sync" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.367944 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e82e3b-0acc-454e-b8b5-cf584f3298b4" containerName="barbican-db-sync" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.377283 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.386233 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.386559 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.386730 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5m576" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.393549 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-df69f5cf-v8lvl"] Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.411618 4886 generic.go:334] "Generic (PLEG): container finished" podID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerID="69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d" exitCode=2 Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.411887 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e8de0fe-585f-4cb8-8deb-d788e443fbd2","Type":"ContainerDied","Data":"69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d"} Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.420602 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8b4cf4966-gt5q7"] Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.422322 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.428567 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.466597 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8b4cf4966-gt5q7"] Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494410 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-config-data\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494625 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-config-data-custom\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494681 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf27d89f-7c4b-49b5-a993-b851f86a2994-logs\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494705 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-combined-ca-bundle\") pod 
\"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494728 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-config-data\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494754 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-config-data-custom\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494778 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-combined-ca-bundle\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494802 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tgmm\" (UniqueName: \"kubernetes.io/projected/cf27d89f-7c4b-49b5-a993-b851f86a2994-kube-api-access-7tgmm\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494840 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmsv\" (UniqueName: \"kubernetes.io/projected/903a1b7e-92e3-455b-af86-c46c9a290f11-kube-api-access-bhmsv\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.494862 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/903a1b7e-92e3-455b-af86-c46c9a290f11-logs\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.569800 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-2l7b6"] Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.571964 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.587127 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-2l7b6"] Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596558 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-config-data-custom\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596624 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf27d89f-7c4b-49b5-a993-b851f86a2994-logs\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " 
pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596652 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-combined-ca-bundle\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596679 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-config-data\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596718 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-config-data-custom\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596750 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-combined-ca-bundle\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596785 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tgmm\" (UniqueName: \"kubernetes.io/projected/cf27d89f-7c4b-49b5-a993-b851f86a2994-kube-api-access-7tgmm\") pod 
\"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596840 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhmsv\" (UniqueName: \"kubernetes.io/projected/903a1b7e-92e3-455b-af86-c46c9a290f11-kube-api-access-bhmsv\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596869 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/903a1b7e-92e3-455b-af86-c46c9a290f11-logs\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.596903 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-config-data\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.611298 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/903a1b7e-92e3-455b-af86-c46c9a290f11-logs\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.614979 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf27d89f-7c4b-49b5-a993-b851f86a2994-logs\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: 
\"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.627062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-config-data\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.633998 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-combined-ca-bundle\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.636519 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-config-data\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.644288 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-combined-ca-bundle\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.644824 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/903a1b7e-92e3-455b-af86-c46c9a290f11-config-data-custom\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: 
\"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.653422 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tgmm\" (UniqueName: \"kubernetes.io/projected/cf27d89f-7c4b-49b5-a993-b851f86a2994-kube-api-access-7tgmm\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.665760 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf27d89f-7c4b-49b5-a993-b851f86a2994-config-data-custom\") pod \"barbican-keystone-listener-8b4cf4966-gt5q7\" (UID: \"cf27d89f-7c4b-49b5-a993-b851f86a2994\") " pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.669906 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhmsv\" (UniqueName: \"kubernetes.io/projected/903a1b7e-92e3-455b-af86-c46c9a290f11-kube-api-access-bhmsv\") pod \"barbican-worker-df69f5cf-v8lvl\" (UID: \"903a1b7e-92e3-455b-af86-c46c9a290f11\") " pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.699147 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.699238 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg67s\" (UniqueName: 
\"kubernetes.io/projected/051f4292-b5ec-4727-b48b-2b18b3f24b4f-kube-api-access-tg67s\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.699327 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.699397 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-config\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.699436 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.699494 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.727102 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-df69f5cf-v8lvl" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.769329 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8499689d4b-dvrvg"] Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.771711 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.775014 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.794817 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.803998 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.804053 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg67s\" (UniqueName: \"kubernetes.io/projected/051f4292-b5ec-4727-b48b-2b18b3f24b4f-kube-api-access-tg67s\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.804138 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 
09:08:28.804241 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-config\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.804281 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.804321 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.805535 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.807387 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.807983 4886 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.808031 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8499689d4b-dvrvg"] Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.809622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.809813 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-config\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.831088 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg67s\" (UniqueName: \"kubernetes.io/projected/051f4292-b5ec-4727-b48b-2b18b3f24b4f-kube-api-access-tg67s\") pod \"dnsmasq-dns-848cf88cfc-2l7b6\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.906130 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84jg9\" (UniqueName: \"kubernetes.io/projected/dbac479b-45c3-44c6-985a-b88d878f3506-kube-api-access-84jg9\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 
09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.906295 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data-custom\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.906329 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-combined-ca-bundle\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.906369 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac479b-45c3-44c6-985a-b88d878f3506-logs\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:28 crc kubenswrapper[4886]: I1124 09:08:28.906453 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.008960 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 
24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.009089 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84jg9\" (UniqueName: \"kubernetes.io/projected/dbac479b-45c3-44c6-985a-b88d878f3506-kube-api-access-84jg9\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.009187 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data-custom\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.009216 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-combined-ca-bundle\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.009249 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac479b-45c3-44c6-985a-b88d878f3506-logs\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.011387 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac479b-45c3-44c6-985a-b88d878f3506-logs\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.018834 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data-custom\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.034741 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-combined-ca-bundle\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.037026 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.056524 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84jg9\" (UniqueName: \"kubernetes.io/projected/dbac479b-45c3-44c6-985a-b88d878f3506-kube-api-access-84jg9\") pod \"barbican-api-8499689d4b-dvrvg\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.071533 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.180101 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.288882 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.423477 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-scripts\") pod \"7ca0ca62-7545-4e1a-9969-121899a789b0\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.424213 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-db-sync-config-data\") pod \"7ca0ca62-7545-4e1a-9969-121899a789b0\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.424349 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqwfs\" (UniqueName: \"kubernetes.io/projected/7ca0ca62-7545-4e1a-9969-121899a789b0-kube-api-access-cqwfs\") pod \"7ca0ca62-7545-4e1a-9969-121899a789b0\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.424629 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-config-data\") pod \"7ca0ca62-7545-4e1a-9969-121899a789b0\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.424681 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-combined-ca-bundle\") pod \"7ca0ca62-7545-4e1a-9969-121899a789b0\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.424759 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/7ca0ca62-7545-4e1a-9969-121899a789b0-etc-machine-id\") pod \"7ca0ca62-7545-4e1a-9969-121899a789b0\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.425595 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ca0ca62-7545-4e1a-9969-121899a789b0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7ca0ca62-7545-4e1a-9969-121899a789b0" (UID: "7ca0ca62-7545-4e1a-9969-121899a789b0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.478342 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-df69f5cf-v8lvl"] Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.481624 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7ca0ca62-7545-4e1a-9969-121899a789b0" (UID: "7ca0ca62-7545-4e1a-9969-121899a789b0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.493456 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca0ca62-7545-4e1a-9969-121899a789b0-kube-api-access-cqwfs" (OuterVolumeSpecName: "kube-api-access-cqwfs") pod "7ca0ca62-7545-4e1a-9969-121899a789b0" (UID: "7ca0ca62-7545-4e1a-9969-121899a789b0"). InnerVolumeSpecName "kube-api-access-cqwfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.493857 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-scripts" (OuterVolumeSpecName: "scripts") pod "7ca0ca62-7545-4e1a-9969-121899a789b0" (UID: "7ca0ca62-7545-4e1a-9969-121899a789b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.497925 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ph4gr" event={"ID":"7ca0ca62-7545-4e1a-9969-121899a789b0","Type":"ContainerDied","Data":"98f958da8dece62769324b8e213afe34afbd39006950094f5969d58aa7ae1600"} Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.497993 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f958da8dece62769324b8e213afe34afbd39006950094f5969d58aa7ae1600" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.498113 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ph4gr" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.529858 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqwfs\" (UniqueName: \"kubernetes.io/projected/7ca0ca62-7545-4e1a-9969-121899a789b0-kube-api-access-cqwfs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.529906 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ca0ca62-7545-4e1a-9969-121899a789b0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.529920 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.529932 4886 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:29 crc kubenswrapper[4886]: W1124 09:08:29.552367 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod903a1b7e_92e3_455b_af86_c46c9a290f11.slice/crio-a7c3b76b81420ec62fac7f95634f1ec38166ac5a57e67341f8c4b6641a5ffb56 WatchSource:0}: Error finding container a7c3b76b81420ec62fac7f95634f1ec38166ac5a57e67341f8c4b6641a5ffb56: Status 404 returned error can't find the container with id a7c3b76b81420ec62fac7f95634f1ec38166ac5a57e67341f8c4b6641a5ffb56 Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.640449 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ca0ca62-7545-4e1a-9969-121899a789b0" (UID: 
"7ca0ca62-7545-4e1a-9969-121899a789b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.641638 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-combined-ca-bundle\") pod \"7ca0ca62-7545-4e1a-9969-121899a789b0\" (UID: \"7ca0ca62-7545-4e1a-9969-121899a789b0\") " Nov 24 09:08:29 crc kubenswrapper[4886]: W1124 09:08:29.642857 4886 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7ca0ca62-7545-4e1a-9969-121899a789b0/volumes/kubernetes.io~secret/combined-ca-bundle Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.642878 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ca0ca62-7545-4e1a-9969-121899a789b0" (UID: "7ca0ca62-7545-4e1a-9969-121899a789b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.706420 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-config-data" (OuterVolumeSpecName: "config-data") pod "7ca0ca62-7545-4e1a-9969-121899a789b0" (UID: "7ca0ca62-7545-4e1a-9969-121899a789b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.747996 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.748031 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca0ca62-7545-4e1a-9969-121899a789b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.754692 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:29 crc kubenswrapper[4886]: E1124 09:08:29.762908 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca0ca62-7545-4e1a-9969-121899a789b0" containerName="cinder-db-sync" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.762964 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca0ca62-7545-4e1a-9969-121899a789b0" containerName="cinder-db-sync" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.763372 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca0ca62-7545-4e1a-9969-121899a789b0" containerName="cinder-db-sync" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.764872 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.783024 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.783328 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.851250 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.851323 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.851369 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.851480 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-scripts\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.851561 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.851618 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976z5\" (UniqueName: \"kubernetes.io/projected/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-kube-api-access-976z5\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.866817 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75ffb75746-pwc5g" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.949978 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-664f9d77dd-zw4gm" podUID="19e275c2-5fd6-4ea7-a023-6d7478ae5750" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.953056 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-976z5\" (UniqueName: \"kubernetes.io/projected/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-kube-api-access-976z5\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.953126 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.953193 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.953230 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.953316 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-scripts\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.953362 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.953445 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.961553 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-scripts\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.966297 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.966397 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-2l7b6"] Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.974874 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-5mr4b"] Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.976851 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.989710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.996861 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.997851 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-976z5\" (UniqueName: \"kubernetes.io/projected/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-kube-api-access-976z5\") pod \"cinder-scheduler-0\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:29 crc kubenswrapper[4886]: I1124 09:08:29.999825 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-5mr4b"] Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.031245 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8b4cf4966-gt5q7"] Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.105647 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.120374 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.122986 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.126651 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.167786 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-svc\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.167890 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.168314 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk5tt\" (UniqueName: \"kubernetes.io/projected/ae356139-b1b8-4952-9aa4-e233d04a9a08-kube-api-access-hk5tt\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.168601 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.168885 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-config\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.168975 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.169384 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:30 crc kubenswrapper[4886]: W1124 09:08:30.202302 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbac479b_45c3_44c6_985a_b88d878f3506.slice/crio-3c65ddea40b6a39588b588c9eb16dc50e0ed3126b076bdc3124924af65a2f7fc WatchSource:0}: Error finding container 3c65ddea40b6a39588b588c9eb16dc50e0ed3126b076bdc3124924af65a2f7fc: Status 404 returned error can't find the container with id 3c65ddea40b6a39588b588c9eb16dc50e0ed3126b076bdc3124924af65a2f7fc Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.209823 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8499689d4b-dvrvg"] Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.274317 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-2l7b6"] Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276023 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276065 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-svc\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276087 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276105 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f448bf9-baf9-4218-9549-6852ad6257ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276123 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276145 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-scripts\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " 
pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk5tt\" (UniqueName: \"kubernetes.io/projected/ae356139-b1b8-4952-9aa4-e233d04a9a08-kube-api-access-hk5tt\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276233 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276249 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f448bf9-baf9-4218-9549-6852ad6257ac-logs\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276278 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276307 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276342 
4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghklc\" (UniqueName: \"kubernetes.io/projected/5f448bf9-baf9-4218-9549-6852ad6257ac-kube-api-access-ghklc\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.276382 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-config\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.277621 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-config\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.277735 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.278477 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-svc\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.281469 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.281883 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.311923 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk5tt\" (UniqueName: \"kubernetes.io/projected/ae356139-b1b8-4952-9aa4-e233d04a9a08-kube-api-access-hk5tt\") pod \"dnsmasq-dns-6578955fd5-5mr4b\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.378657 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.379124 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f448bf9-baf9-4218-9549-6852ad6257ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.379194 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-scripts\") pod \"cinder-api-0\" (UID: 
\"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.379274 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.379296 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f448bf9-baf9-4218-9549-6852ad6257ac-logs\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.379340 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.379409 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghklc\" (UniqueName: \"kubernetes.io/projected/5f448bf9-baf9-4218-9549-6852ad6257ac-kube-api-access-ghklc\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.382878 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f448bf9-baf9-4218-9549-6852ad6257ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.383325 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5f448bf9-baf9-4218-9549-6852ad6257ac-logs\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.385785 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.388446 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.389483 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-scripts\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.391377 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.422036 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghklc\" (UniqueName: \"kubernetes.io/projected/5f448bf9-baf9-4218-9549-6852ad6257ac-kube-api-access-ghklc\") pod \"cinder-api-0\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.453100 4886 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.484954 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.513671 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-df69f5cf-v8lvl" event={"ID":"903a1b7e-92e3-455b-af86-c46c9a290f11","Type":"ContainerStarted","Data":"a7c3b76b81420ec62fac7f95634f1ec38166ac5a57e67341f8c4b6641a5ffb56"} Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.515958 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8499689d4b-dvrvg" event={"ID":"dbac479b-45c3-44c6-985a-b88d878f3506","Type":"ContainerStarted","Data":"3c65ddea40b6a39588b588c9eb16dc50e0ed3126b076bdc3124924af65a2f7fc"} Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.517230 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" event={"ID":"cf27d89f-7c4b-49b5-a993-b851f86a2994","Type":"ContainerStarted","Data":"fecb61db24f9235d3ede2185ead377e443cb0d3321cb4c6775dff3b960f004a7"} Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.523097 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" event={"ID":"051f4292-b5ec-4727-b48b-2b18b3f24b4f","Type":"ContainerStarted","Data":"f99542f470165462eb5b49edbc1c2ed2be08048cefd70dd0c32e16c2ff0bc1d3"} Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.913167 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:30 crc kubenswrapper[4886]: I1124 09:08:30.970650 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-5mr4b"] Nov 24 09:08:30 crc kubenswrapper[4886]: W1124 09:08:30.994010 4886 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae356139_b1b8_4952_9aa4_e233d04a9a08.slice/crio-f8de8427e939fb3d07342d41a287b1f0ed6ce3cca1ea67df27095399343cf7af WatchSource:0}: Error finding container f8de8427e939fb3d07342d41a287b1f0ed6ce3cca1ea67df27095399343cf7af: Status 404 returned error can't find the container with id f8de8427e939fb3d07342d41a287b1f0ed6ce3cca1ea67df27095399343cf7af Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.260634 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.556543 4886 generic.go:334] "Generic (PLEG): container finished" podID="051f4292-b5ec-4727-b48b-2b18b3f24b4f" containerID="daef656352f05d784e4243bf911160e59aa25b20a438cef2952e3ddcea909e87" exitCode=0 Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.557193 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" event={"ID":"051f4292-b5ec-4727-b48b-2b18b3f24b4f","Type":"ContainerDied","Data":"daef656352f05d784e4243bf911160e59aa25b20a438cef2952e3ddcea909e87"} Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.565880 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f448bf9-baf9-4218-9549-6852ad6257ac","Type":"ContainerStarted","Data":"1e014d3a7dac88abb24865c840712e44f50d0bad16875241b38f06a9fcfe29a9"} Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.568450 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8499689d4b-dvrvg" event={"ID":"dbac479b-45c3-44c6-985a-b88d878f3506","Type":"ContainerStarted","Data":"2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9"} Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.568508 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8499689d4b-dvrvg" 
event={"ID":"dbac479b-45c3-44c6-985a-b88d878f3506","Type":"ContainerStarted","Data":"1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe"} Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.569830 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.569864 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.572406 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" event={"ID":"ae356139-b1b8-4952-9aa4-e233d04a9a08","Type":"ContainerStarted","Data":"f8de8427e939fb3d07342d41a287b1f0ed6ce3cca1ea67df27095399343cf7af"} Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.580428 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01e6ef25-7f8f-4ebb-9f73-d05aecd19942","Type":"ContainerStarted","Data":"faeb8d32b1462126c14dd028e6622054d465dbc49ff181367a9d176022a5ab2f"} Nov 24 09:08:31 crc kubenswrapper[4886]: I1124 09:08:31.619812 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8499689d4b-dvrvg" podStartSLOduration=3.619778168 podStartE2EDuration="3.619778168s" podCreationTimestamp="2025-11-24 09:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:31.613599365 +0000 UTC m=+1167.500337500" watchObservedRunningTime="2025-11-24 09:08:31.619778168 +0000 UTC m=+1167.506516303" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.322832 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.472502 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-swift-storage-0\") pod \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.472635 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-sb\") pod \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.472737 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg67s\" (UniqueName: \"kubernetes.io/projected/051f4292-b5ec-4727-b48b-2b18b3f24b4f-kube-api-access-tg67s\") pod \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.473046 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-config\") pod \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.473213 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-nb\") pod \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.473232 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-svc\") pod \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\" (UID: \"051f4292-b5ec-4727-b48b-2b18b3f24b4f\") " Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.510493 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051f4292-b5ec-4727-b48b-2b18b3f24b4f-kube-api-access-tg67s" (OuterVolumeSpecName: "kube-api-access-tg67s") pod "051f4292-b5ec-4727-b48b-2b18b3f24b4f" (UID: "051f4292-b5ec-4727-b48b-2b18b3f24b4f"). InnerVolumeSpecName "kube-api-access-tg67s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.540729 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "051f4292-b5ec-4727-b48b-2b18b3f24b4f" (UID: "051f4292-b5ec-4727-b48b-2b18b3f24b4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.561530 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-config" (OuterVolumeSpecName: "config") pod "051f4292-b5ec-4727-b48b-2b18b3f24b4f" (UID: "051f4292-b5ec-4727-b48b-2b18b3f24b4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.562927 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "051f4292-b5ec-4727-b48b-2b18b3f24b4f" (UID: "051f4292-b5ec-4727-b48b-2b18b3f24b4f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.576504 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.576541 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.576551 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg67s\" (UniqueName: \"kubernetes.io/projected/051f4292-b5ec-4727-b48b-2b18b3f24b4f-kube-api-access-tg67s\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.576563 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.602368 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "051f4292-b5ec-4727-b48b-2b18b3f24b4f" (UID: "051f4292-b5ec-4727-b48b-2b18b3f24b4f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.602399 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "051f4292-b5ec-4727-b48b-2b18b3f24b4f" (UID: "051f4292-b5ec-4727-b48b-2b18b3f24b4f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.604377 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" event={"ID":"051f4292-b5ec-4727-b48b-2b18b3f24b4f","Type":"ContainerDied","Data":"f99542f470165462eb5b49edbc1c2ed2be08048cefd70dd0c32e16c2ff0bc1d3"} Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.604449 4886 scope.go:117] "RemoveContainer" containerID="daef656352f05d784e4243bf911160e59aa25b20a438cef2952e3ddcea909e87" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.604616 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-2l7b6" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.609503 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f448bf9-baf9-4218-9549-6852ad6257ac","Type":"ContainerStarted","Data":"bfbaef69f3f21662738d02fb7297b355b475cea7c5f981ec7d7fcdd110173134"} Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.620946 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae356139-b1b8-4952-9aa4-e233d04a9a08" containerID="1d6704eda7e501572eb7ab47130a3a5c5fca8e659fac7045175870e3bbdb594e" exitCode=0 Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.622847 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" event={"ID":"ae356139-b1b8-4952-9aa4-e233d04a9a08","Type":"ContainerDied","Data":"1d6704eda7e501572eb7ab47130a3a5c5fca8e659fac7045175870e3bbdb594e"} Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.682769 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.682810 4886 reconciler_common.go:293] "Volume detached for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/051f4292-b5ec-4727-b48b-2b18b3f24b4f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:32 crc kubenswrapper[4886]: E1124 09:08:32.694410 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod051f4292_b5ec_4727_b48b_2b18b3f24b4f.slice/crio-f99542f470165462eb5b49edbc1c2ed2be08048cefd70dd0c32e16c2ff0bc1d3\": RecentStats: unable to find data in memory cache]" Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.724693 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-2l7b6"] Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.735798 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-2l7b6"] Nov 24 09:08:32 crc kubenswrapper[4886]: I1124 09:08:32.870345 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051f4292-b5ec-4727-b48b-2b18b3f24b4f" path="/var/lib/kubelet/pods/051f4292-b5ec-4727-b48b-2b18b3f24b4f/volumes" Nov 24 09:08:33 crc kubenswrapper[4886]: I1124 09:08:33.220431 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:33 crc kubenswrapper[4886]: I1124 09:08:33.642296 4886 generic.go:334] "Generic (PLEG): container finished" podID="2b936c64-69ae-43db-9d33-9ed58719be26" containerID="043796bb0b674d4e5dc3a373e0e41904fae080bfdbe53f3350ea850561868dce" exitCode=137 Nov 24 09:08:33 crc kubenswrapper[4886]: I1124 09:08:33.642352 4886 generic.go:334] "Generic (PLEG): container finished" podID="2b936c64-69ae-43db-9d33-9ed58719be26" containerID="bfc902b8166c3fe12c6fbd87a7114a3179672f82f0eaf46d09d5d4af87c8a9f7" exitCode=137 Nov 24 09:08:33 crc kubenswrapper[4886]: I1124 09:08:33.642415 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5654c4b6cf-6mjl4" 
event={"ID":"2b936c64-69ae-43db-9d33-9ed58719be26","Type":"ContainerDied","Data":"043796bb0b674d4e5dc3a373e0e41904fae080bfdbe53f3350ea850561868dce"} Nov 24 09:08:33 crc kubenswrapper[4886]: I1124 09:08:33.642453 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5654c4b6cf-6mjl4" event={"ID":"2b936c64-69ae-43db-9d33-9ed58719be26","Type":"ContainerDied","Data":"bfc902b8166c3fe12c6fbd87a7114a3179672f82f0eaf46d09d5d4af87c8a9f7"} Nov 24 09:08:33 crc kubenswrapper[4886]: I1124 09:08:33.645820 4886 generic.go:334] "Generic (PLEG): container finished" podID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerID="a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b" exitCode=0 Nov 24 09:08:33 crc kubenswrapper[4886]: I1124 09:08:33.646012 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e8de0fe-585f-4cb8-8deb-d788e443fbd2","Type":"ContainerDied","Data":"a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b"} Nov 24 09:08:33 crc kubenswrapper[4886]: I1124 09:08:33.736403 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.382259 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.430732 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b936c64-69ae-43db-9d33-9ed58719be26-horizon-secret-key\") pod \"2b936c64-69ae-43db-9d33-9ed58719be26\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.430788 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-scripts\") pod \"2b936c64-69ae-43db-9d33-9ed58719be26\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.430860 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-config-data\") pod \"2b936c64-69ae-43db-9d33-9ed58719be26\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.431075 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b936c64-69ae-43db-9d33-9ed58719be26-logs\") pod \"2b936c64-69ae-43db-9d33-9ed58719be26\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.431127 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfhgr\" (UniqueName: \"kubernetes.io/projected/2b936c64-69ae-43db-9d33-9ed58719be26-kube-api-access-hfhgr\") pod \"2b936c64-69ae-43db-9d33-9ed58719be26\" (UID: \"2b936c64-69ae-43db-9d33-9ed58719be26\") " Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.436812 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2b936c64-69ae-43db-9d33-9ed58719be26-logs" (OuterVolumeSpecName: "logs") pod "2b936c64-69ae-43db-9d33-9ed58719be26" (UID: "2b936c64-69ae-43db-9d33-9ed58719be26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.448972 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b936c64-69ae-43db-9d33-9ed58719be26-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2b936c64-69ae-43db-9d33-9ed58719be26" (UID: "2b936c64-69ae-43db-9d33-9ed58719be26"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.451380 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b936c64-69ae-43db-9d33-9ed58719be26-kube-api-access-hfhgr" (OuterVolumeSpecName: "kube-api-access-hfhgr") pod "2b936c64-69ae-43db-9d33-9ed58719be26" (UID: "2b936c64-69ae-43db-9d33-9ed58719be26"). InnerVolumeSpecName "kube-api-access-hfhgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.496752 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-config-data" (OuterVolumeSpecName: "config-data") pod "2b936c64-69ae-43db-9d33-9ed58719be26" (UID: "2b936c64-69ae-43db-9d33-9ed58719be26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.502731 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-scripts" (OuterVolumeSpecName: "scripts") pod "2b936c64-69ae-43db-9d33-9ed58719be26" (UID: "2b936c64-69ae-43db-9d33-9ed58719be26"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.533655 4886 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b936c64-69ae-43db-9d33-9ed58719be26-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.533693 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.533712 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b936c64-69ae-43db-9d33-9ed58719be26-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.533721 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b936c64-69ae-43db-9d33-9ed58719be26-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.533730 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfhgr\" (UniqueName: \"kubernetes.io/projected/2b936c64-69ae-43db-9d33-9ed58719be26-kube-api-access-hfhgr\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.695440 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" event={"ID":"cf27d89f-7c4b-49b5-a993-b851f86a2994","Type":"ContainerStarted","Data":"c46ac9aacaf84422cbfb3c763eb955ab2ef556ea5faad25ffff36a01183e3b60"} Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.705435 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-df69f5cf-v8lvl" 
event={"ID":"903a1b7e-92e3-455b-af86-c46c9a290f11","Type":"ContainerStarted","Data":"d5c0965ace1845dbcf46397accdc6360e4cb659334cdc348b83e5f7b6e660781"} Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.711924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" event={"ID":"ae356139-b1b8-4952-9aa4-e233d04a9a08","Type":"ContainerStarted","Data":"ab8d7eb39a71552b9e686400215af23d4665ff6caeed8b010eccf04885875aa4"} Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.713375 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.721671 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5654c4b6cf-6mjl4" event={"ID":"2b936c64-69ae-43db-9d33-9ed58719be26","Type":"ContainerDied","Data":"8b06a7c5fafc3293b46c9472a0530b27467a672c2189bfe8a5b4bf7ab36d68c3"} Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.721722 4886 scope.go:117] "RemoveContainer" containerID="043796bb0b674d4e5dc3a373e0e41904fae080bfdbe53f3350ea850561868dce" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.721813 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5654c4b6cf-6mjl4" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.749606 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" podStartSLOduration=5.749586766 podStartE2EDuration="5.749586766s" podCreationTimestamp="2025-11-24 09:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:34.738383162 +0000 UTC m=+1170.625121297" watchObservedRunningTime="2025-11-24 09:08:34.749586766 +0000 UTC m=+1170.636324901" Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.818362 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5654c4b6cf-6mjl4"] Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.839941 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5654c4b6cf-6mjl4"] Nov 24 09:08:34 crc kubenswrapper[4886]: I1124 09:08:34.883267 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b936c64-69ae-43db-9d33-9ed58719be26" path="/var/lib/kubelet/pods/2b936c64-69ae-43db-9d33-9ed58719be26/volumes" Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.081102 4886 scope.go:117] "RemoveContainer" containerID="bfc902b8166c3fe12c6fbd87a7114a3179672f82f0eaf46d09d5d4af87c8a9f7" Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.771678 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01e6ef25-7f8f-4ebb-9f73-d05aecd19942","Type":"ContainerStarted","Data":"13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63"} Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.776499 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" 
event={"ID":"cf27d89f-7c4b-49b5-a993-b851f86a2994","Type":"ContainerStarted","Data":"7a916ee9b764543e221df0f3d0a3e2324a19eeefd18d48da45c94422ef524ba3"} Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.798765 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-df69f5cf-v8lvl" event={"ID":"903a1b7e-92e3-455b-af86-c46c9a290f11","Type":"ContainerStarted","Data":"45b67f39f7a28dc67a127ee340f6f16a51284a63f11f78dd2b48a433ae8d7a35"} Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.803534 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f448bf9-baf9-4218-9549-6852ad6257ac","Type":"ContainerStarted","Data":"4c5ddcfb8cbc81d15833b0232436f04e6a9b690a9319412aa7a3cc57c9b145d9"} Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.803728 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerName="cinder-api-log" containerID="cri-o://bfbaef69f3f21662738d02fb7297b355b475cea7c5f981ec7d7fcdd110173134" gracePeriod=30 Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.803855 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerName="cinder-api" containerID="cri-o://4c5ddcfb8cbc81d15833b0232436f04e6a9b690a9319412aa7a3cc57c9b145d9" gracePeriod=30 Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.818027 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8b4cf4966-gt5q7" podStartSLOduration=3.841465282 podStartE2EDuration="7.817995243s" podCreationTimestamp="2025-11-24 09:08:28 +0000 UTC" firstStartedPulling="2025-11-24 09:08:30.04688255 +0000 UTC m=+1165.933620685" lastFinishedPulling="2025-11-24 09:08:34.023412511 +0000 UTC m=+1169.910150646" observedRunningTime="2025-11-24 09:08:35.809257248 +0000 UTC 
m=+1171.695995403" watchObservedRunningTime="2025-11-24 09:08:35.817995243 +0000 UTC m=+1171.704733398" Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.853836 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-df69f5cf-v8lvl" podStartSLOduration=3.414567325 podStartE2EDuration="7.853812276s" podCreationTimestamp="2025-11-24 09:08:28 +0000 UTC" firstStartedPulling="2025-11-24 09:08:29.583127171 +0000 UTC m=+1165.469865306" lastFinishedPulling="2025-11-24 09:08:34.022372122 +0000 UTC m=+1169.909110257" observedRunningTime="2025-11-24 09:08:35.845113402 +0000 UTC m=+1171.731851537" watchObservedRunningTime="2025-11-24 09:08:35.853812276 +0000 UTC m=+1171.740550401" Nov 24 09:08:35 crc kubenswrapper[4886]: I1124 09:08:35.883857 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.8838350980000005 podStartE2EDuration="5.883835098s" podCreationTimestamp="2025-11-24 09:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:35.882292984 +0000 UTC m=+1171.769031119" watchObservedRunningTime="2025-11-24 09:08:35.883835098 +0000 UTC m=+1171.770573233" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.205388 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58775dd67f-bvv4s" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.301586 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8568c485b4-l582l"] Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.301904 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8568c485b4-l582l" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerName="neutron-api" containerID="cri-o://8d13265eb27febd68b62cd429cdb79754e348aaaf855824ed40c4f409263550d" gracePeriod=30 Nov 24 
09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.302347 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8568c485b4-l582l" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerName="neutron-httpd" containerID="cri-o://889ae79220a89d15eec8f56867b0331a8b9a2285a600e5e3ca9e38df50c45943" gracePeriod=30 Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.799689 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76455fdd78-8k9rz"] Nov 24 09:08:36 crc kubenswrapper[4886]: E1124 09:08:36.801310 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b936c64-69ae-43db-9d33-9ed58719be26" containerName="horizon" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.801386 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b936c64-69ae-43db-9d33-9ed58719be26" containerName="horizon" Nov 24 09:08:36 crc kubenswrapper[4886]: E1124 09:08:36.801460 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b936c64-69ae-43db-9d33-9ed58719be26" containerName="horizon-log" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.801511 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b936c64-69ae-43db-9d33-9ed58719be26" containerName="horizon-log" Nov 24 09:08:36 crc kubenswrapper[4886]: E1124 09:08:36.801575 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051f4292-b5ec-4727-b48b-2b18b3f24b4f" containerName="init" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.801630 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="051f4292-b5ec-4727-b48b-2b18b3f24b4f" containerName="init" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.801890 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b936c64-69ae-43db-9d33-9ed58719be26" containerName="horizon" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.801978 4886 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2b936c64-69ae-43db-9d33-9ed58719be26" containerName="horizon-log" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.802051 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="051f4292-b5ec-4727-b48b-2b18b3f24b4f" containerName="init" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.803291 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.808986 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.809244 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.815211 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01e6ef25-7f8f-4ebb-9f73-d05aecd19942","Type":"ContainerStarted","Data":"9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb"} Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.817838 4886 generic.go:334] "Generic (PLEG): container finished" podID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerID="4c5ddcfb8cbc81d15833b0232436f04e6a9b690a9319412aa7a3cc57c9b145d9" exitCode=0 Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.817880 4886 generic.go:334] "Generic (PLEG): container finished" podID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerID="bfbaef69f3f21662738d02fb7297b355b475cea7c5f981ec7d7fcdd110173134" exitCode=143 Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.817998 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f448bf9-baf9-4218-9549-6852ad6257ac","Type":"ContainerDied","Data":"4c5ddcfb8cbc81d15833b0232436f04e6a9b690a9319412aa7a3cc57c9b145d9"} Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.818054 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f448bf9-baf9-4218-9549-6852ad6257ac","Type":"ContainerDied","Data":"bfbaef69f3f21662738d02fb7297b355b475cea7c5f981ec7d7fcdd110173134"} Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.827691 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76455fdd78-8k9rz"] Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.898579 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-combined-ca-bundle\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.898718 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-internal-tls-certs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.898837 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5555aeec-470e-473c-ad74-de78791861dc-logs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.898875 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn7zh\" (UniqueName: \"kubernetes.io/projected/5555aeec-470e-473c-ad74-de78791861dc-kube-api-access-pn7zh\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " 
pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.899076 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-config-data-custom\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.899206 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-config-data\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:36 crc kubenswrapper[4886]: I1124 09:08:36.899272 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-public-tls-certs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.003550 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-config-data\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.003662 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-public-tls-certs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") 
" pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.003819 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-combined-ca-bundle\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.003872 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-internal-tls-certs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.003960 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5555aeec-470e-473c-ad74-de78791861dc-logs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.003989 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn7zh\" (UniqueName: \"kubernetes.io/projected/5555aeec-470e-473c-ad74-de78791861dc-kube-api-access-pn7zh\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.004087 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-config-data-custom\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " 
pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.005315 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5555aeec-470e-473c-ad74-de78791861dc-logs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.011275 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-config-data-custom\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.014415 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-config-data\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.014993 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-combined-ca-bundle\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.018062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-public-tls-certs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 
09:08:37.027861 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5555aeec-470e-473c-ad74-de78791861dc-internal-tls-certs\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.028063 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn7zh\" (UniqueName: \"kubernetes.io/projected/5555aeec-470e-473c-ad74-de78791861dc-kube-api-access-pn7zh\") pod \"barbican-api-76455fdd78-8k9rz\" (UID: \"5555aeec-470e-473c-ad74-de78791861dc\") " pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.124747 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.698935 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.734590 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.673038371 podStartE2EDuration="8.734563624s" podCreationTimestamp="2025-11-24 09:08:29 +0000 UTC" firstStartedPulling="2025-11-24 09:08:30.959216453 +0000 UTC m=+1166.845954588" lastFinishedPulling="2025-11-24 09:08:34.020741706 +0000 UTC m=+1169.907479841" observedRunningTime="2025-11-24 09:08:36.892783789 +0000 UTC m=+1172.779521924" watchObservedRunningTime="2025-11-24 09:08:37.734563624 +0000 UTC m=+1173.621301759" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.738397 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-scripts\") pod \"5f448bf9-baf9-4218-9549-6852ad6257ac\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.738576 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data\") pod \"5f448bf9-baf9-4218-9549-6852ad6257ac\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.738639 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-combined-ca-bundle\") pod \"5f448bf9-baf9-4218-9549-6852ad6257ac\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.738664 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data-custom\") pod 
\"5f448bf9-baf9-4218-9549-6852ad6257ac\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.738702 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f448bf9-baf9-4218-9549-6852ad6257ac-logs\") pod \"5f448bf9-baf9-4218-9549-6852ad6257ac\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.738853 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f448bf9-baf9-4218-9549-6852ad6257ac-etc-machine-id\") pod \"5f448bf9-baf9-4218-9549-6852ad6257ac\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.738978 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghklc\" (UniqueName: \"kubernetes.io/projected/5f448bf9-baf9-4218-9549-6852ad6257ac-kube-api-access-ghklc\") pod \"5f448bf9-baf9-4218-9549-6852ad6257ac\" (UID: \"5f448bf9-baf9-4218-9549-6852ad6257ac\") " Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.739541 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f448bf9-baf9-4218-9549-6852ad6257ac-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f448bf9-baf9-4218-9549-6852ad6257ac" (UID: "5f448bf9-baf9-4218-9549-6852ad6257ac"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.740206 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f448bf9-baf9-4218-9549-6852ad6257ac-logs" (OuterVolumeSpecName: "logs") pod "5f448bf9-baf9-4218-9549-6852ad6257ac" (UID: "5f448bf9-baf9-4218-9549-6852ad6257ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.747395 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f448bf9-baf9-4218-9549-6852ad6257ac" (UID: "5f448bf9-baf9-4218-9549-6852ad6257ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.747684 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-scripts" (OuterVolumeSpecName: "scripts") pod "5f448bf9-baf9-4218-9549-6852ad6257ac" (UID: "5f448bf9-baf9-4218-9549-6852ad6257ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.760660 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f448bf9-baf9-4218-9549-6852ad6257ac-kube-api-access-ghklc" (OuterVolumeSpecName: "kube-api-access-ghklc") pod "5f448bf9-baf9-4218-9549-6852ad6257ac" (UID: "5f448bf9-baf9-4218-9549-6852ad6257ac"). InnerVolumeSpecName "kube-api-access-ghklc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.841752 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f448bf9-baf9-4218-9549-6852ad6257ac-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.846031 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghklc\" (UniqueName: \"kubernetes.io/projected/5f448bf9-baf9-4218-9549-6852ad6257ac-kube-api-access-ghklc\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.846059 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.846068 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.846076 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f448bf9-baf9-4218-9549-6852ad6257ac-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.852450 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f448bf9-baf9-4218-9549-6852ad6257ac" (UID: "5f448bf9-baf9-4218-9549-6852ad6257ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.872683 4886 generic.go:334] "Generic (PLEG): container finished" podID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerID="889ae79220a89d15eec8f56867b0331a8b9a2285a600e5e3ca9e38df50c45943" exitCode=0 Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.872971 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8568c485b4-l582l" event={"ID":"58e8a97b-bd79-4e90-99ef-4e0eab79c454","Type":"ContainerDied","Data":"889ae79220a89d15eec8f56867b0331a8b9a2285a600e5e3ca9e38df50c45943"} Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.885138 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.885427 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f448bf9-baf9-4218-9549-6852ad6257ac","Type":"ContainerDied","Data":"1e014d3a7dac88abb24865c840712e44f50d0bad16875241b38f06a9fcfe29a9"} Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.885532 4886 scope.go:117] "RemoveContainer" containerID="4c5ddcfb8cbc81d15833b0232436f04e6a9b690a9319412aa7a3cc57c9b145d9" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.906332 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data" (OuterVolumeSpecName: "config-data") pod "5f448bf9-baf9-4218-9549-6852ad6257ac" (UID: "5f448bf9-baf9-4218-9549-6852ad6257ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.920852 4886 scope.go:117] "RemoveContainer" containerID="bfbaef69f3f21662738d02fb7297b355b475cea7c5f981ec7d7fcdd110173134" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.961256 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:37 crc kubenswrapper[4886]: I1124 09:08:37.961313 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f448bf9-baf9-4218-9549-6852ad6257ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.007116 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76455fdd78-8k9rz"] Nov 24 09:08:38 crc kubenswrapper[4886]: W1124 09:08:38.015536 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5555aeec_470e_473c_ad74_de78791861dc.slice/crio-46f768cbe2691ea7351ad8378de39e6d9f0415fb62c1e917b80c3747d2897a23 WatchSource:0}: Error finding container 46f768cbe2691ea7351ad8378de39e6d9f0415fb62c1e917b80c3747d2897a23: Status 404 returned error can't find the container with id 46f768cbe2691ea7351ad8378de39e6d9f0415fb62c1e917b80c3747d2897a23 Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.228198 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.268293 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.291474 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:38 crc kubenswrapper[4886]: E1124 09:08:38.291976 4886 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerName="cinder-api-log" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.292001 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerName="cinder-api-log" Nov 24 09:08:38 crc kubenswrapper[4886]: E1124 09:08:38.292035 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerName="cinder-api" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.292043 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerName="cinder-api" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.292310 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerName="cinder-api" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.292350 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" containerName="cinder-api-log" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.299678 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.305014 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.305733 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.306942 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.310807 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.370850 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.370944 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22fb5c5f-d94b-4069-bef0-62e95c42e89e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.370976 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrpjq\" (UniqueName: \"kubernetes.io/projected/22fb5c5f-d94b-4069-bef0-62e95c42e89e-kube-api-access-hrpjq\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.371005 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.371056 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-scripts\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.371176 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-config-data\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.371269 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.371308 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-config-data-custom\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.371433 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22fb5c5f-d94b-4069-bef0-62e95c42e89e-logs\") pod \"cinder-api-0\" 
(UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481544 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22fb5c5f-d94b-4069-bef0-62e95c42e89e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrpjq\" (UniqueName: \"kubernetes.io/projected/22fb5c5f-d94b-4069-bef0-62e95c42e89e-kube-api-access-hrpjq\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481832 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481860 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-scripts\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-config-data\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481927 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481946 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-config-data-custom\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.481981 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22fb5c5f-d94b-4069-bef0-62e95c42e89e-logs\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.482656 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22fb5c5f-d94b-4069-bef0-62e95c42e89e-logs\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.488119 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22fb5c5f-d94b-4069-bef0-62e95c42e89e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: 
I1124 09:08:38.497792 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-scripts\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.498070 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.499955 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.505027 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-config-data-custom\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.509830 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.516335 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22fb5c5f-d94b-4069-bef0-62e95c42e89e-config-data\") pod \"cinder-api-0\" (UID: 
\"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.527787 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrpjq\" (UniqueName: \"kubernetes.io/projected/22fb5c5f-d94b-4069-bef0-62e95c42e89e-kube-api-access-hrpjq\") pod \"cinder-api-0\" (UID: \"22fb5c5f-d94b-4069-bef0-62e95c42e89e\") " pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.618726 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.865801 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f448bf9-baf9-4218-9549-6852ad6257ac" path="/var/lib/kubelet/pods/5f448bf9-baf9-4218-9549-6852ad6257ac/volumes" Nov 24 09:08:38 crc kubenswrapper[4886]: I1124 09:08:38.910833 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76455fdd78-8k9rz" event={"ID":"5555aeec-470e-473c-ad74-de78791861dc","Type":"ContainerStarted","Data":"46f768cbe2691ea7351ad8378de39e6d9f0415fb62c1e917b80c3747d2897a23"} Nov 24 09:08:39 crc kubenswrapper[4886]: I1124 09:08:39.167607 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:08:39 crc kubenswrapper[4886]: I1124 09:08:39.935992 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22fb5c5f-d94b-4069-bef0-62e95c42e89e","Type":"ContainerStarted","Data":"8181f3cb523b81570d1102996a640328ed8de7a0565b8cd839ae7224df2f13e0"} Nov 24 09:08:40 crc kubenswrapper[4886]: I1124 09:08:40.106124 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 09:08:40 crc kubenswrapper[4886]: I1124 09:08:40.223495 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-8499689d4b-dvrvg" 
podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:08:40 crc kubenswrapper[4886]: I1124 09:08:40.455465 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:08:40 crc kubenswrapper[4886]: I1124 09:08:40.615732 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j6rct"] Nov 24 09:08:40 crc kubenswrapper[4886]: I1124 09:08:40.616100 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerName="dnsmasq-dns" containerID="cri-o://0333e0897d0b7c2f487364113d5b89b47d1882e24e7f01a38ec764a7b93dda93" gracePeriod=10 Nov 24 09:08:40 crc kubenswrapper[4886]: I1124 09:08:40.958131 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76455fdd78-8k9rz" event={"ID":"5555aeec-470e-473c-ad74-de78791861dc","Type":"ContainerStarted","Data":"ede791930135049df8f8b37ece3ed7eb8af669fed0e2133117d35ecf44556b2d"} Nov 24 09:08:40 crc kubenswrapper[4886]: I1124 09:08:40.963209 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22fb5c5f-d94b-4069-bef0-62e95c42e89e","Type":"ContainerStarted","Data":"ba27feff5a0912245829012a095bec943eb1177b537b04d35a27b391a0649ba4"} Nov 24 09:08:41 crc kubenswrapper[4886]: I1124 09:08:41.035678 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 09:08:41 crc kubenswrapper[4886]: I1124 09:08:41.084148 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:41 crc kubenswrapper[4886]: I1124 09:08:41.346592 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:41 crc kubenswrapper[4886]: I1124 09:08:41.774405 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:42 crc kubenswrapper[4886]: I1124 09:08:42.005064 4886 generic.go:334] "Generic (PLEG): container finished" podID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerID="0333e0897d0b7c2f487364113d5b89b47d1882e24e7f01a38ec764a7b93dda93" exitCode=0 Nov 24 09:08:42 crc kubenswrapper[4886]: I1124 09:08:42.005334 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" event={"ID":"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a","Type":"ContainerDied","Data":"0333e0897d0b7c2f487364113d5b89b47d1882e24e7f01a38ec764a7b93dda93"} Nov 24 09:08:42 crc kubenswrapper[4886]: I1124 09:08:42.006766 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerName="cinder-scheduler" containerID="cri-o://13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63" gracePeriod=30 Nov 24 09:08:42 crc kubenswrapper[4886]: I1124 09:08:42.007274 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerName="probe" containerID="cri-o://9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb" gracePeriod=30 Nov 24 09:08:42 crc kubenswrapper[4886]: I1124 09:08:42.790220 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:08:42 crc kubenswrapper[4886]: E1124 09:08:42.974869 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e6ef25_7f8f_4ebb_9f73_d05aecd19942.slice/crio-9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:08:43 crc kubenswrapper[4886]: I1124 09:08:43.033633 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76455fdd78-8k9rz" event={"ID":"5555aeec-470e-473c-ad74-de78791861dc","Type":"ContainerStarted","Data":"7b5aba7a50eb066aafbda10a153162ebe1e04ba535e4a57d87e20fe7b307e166"} Nov 24 09:08:43 crc kubenswrapper[4886]: I1124 09:08:43.370900 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:08:43 crc kubenswrapper[4886]: I1124 09:08:43.469216 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: connect: connection refused" Nov 24 09:08:43 crc kubenswrapper[4886]: I1124 09:08:43.920744 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:43 crc kubenswrapper[4886]: I1124 09:08:43.924024 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-646878466-vzd4z" Nov 24 09:08:44 crc kubenswrapper[4886]: I1124 09:08:44.050880 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22fb5c5f-d94b-4069-bef0-62e95c42e89e","Type":"ContainerStarted","Data":"553c9ebfb39b73b3da53409fbbcd4b4cecbbf26dd305639d8131b798c073c43b"} Nov 24 09:08:44 crc kubenswrapper[4886]: I1124 09:08:44.926004 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:08:45 crc kubenswrapper[4886]: I1124 09:08:45.254713 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/horizon-664f9d77dd-zw4gm" Nov 24 09:08:45 crc kubenswrapper[4886]: I1124 09:08:45.336556 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75ffb75746-pwc5g"] Nov 24 09:08:45 crc kubenswrapper[4886]: I1124 09:08:45.336799 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75ffb75746-pwc5g" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon-log" containerID="cri-o://692850d79528a83bba835c99fe3464de2f71e29a9c94c5526cca407f8eedbbb6" gracePeriod=30 Nov 24 09:08:45 crc kubenswrapper[4886]: I1124 09:08:45.337054 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75ffb75746-pwc5g" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" containerID="cri-o://06e9411832b7ffda01b80eacb06c2049544b01676f1e649af4e6b2361c635c65" gracePeriod=30 Nov 24 09:08:46 crc kubenswrapper[4886]: I1124 09:08:46.079869 4886 generic.go:334] "Generic (PLEG): container finished" podID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerID="9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb" exitCode=0 Nov 24 09:08:46 crc kubenswrapper[4886]: I1124 09:08:46.079936 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01e6ef25-7f8f-4ebb-9f73-d05aecd19942","Type":"ContainerDied","Data":"9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb"} Nov 24 09:08:48 crc kubenswrapper[4886]: I1124 09:08:48.468256 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: connect: connection refused" Nov 24 09:08:49 crc kubenswrapper[4886]: I1124 09:08:49.111530 4886 generic.go:334] "Generic (PLEG): container finished" podID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" 
containerID="06e9411832b7ffda01b80eacb06c2049544b01676f1e649af4e6b2361c635c65" exitCode=0 Nov 24 09:08:49 crc kubenswrapper[4886]: I1124 09:08:49.111611 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75ffb75746-pwc5g" event={"ID":"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc","Type":"ContainerDied","Data":"06e9411832b7ffda01b80eacb06c2049544b01676f1e649af4e6b2361c635c65"} Nov 24 09:08:49 crc kubenswrapper[4886]: I1124 09:08:49.112223 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 09:08:49 crc kubenswrapper[4886]: I1124 09:08:49.146924 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=11.14689336 podStartE2EDuration="11.14689336s" podCreationTimestamp="2025-11-24 09:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:49.13190441 +0000 UTC m=+1185.018642555" watchObservedRunningTime="2025-11-24 09:08:49.14689336 +0000 UTC m=+1185.033631505" Nov 24 09:08:49 crc kubenswrapper[4886]: I1124 09:08:49.497051 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-78ff5b5cf5-swx4n" Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:49.866825 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75ffb75746-pwc5g" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.142702 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.897958 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.938119 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-swift-storage-0\") pod \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.939207 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z8vc\" (UniqueName: \"kubernetes.io/projected/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-kube-api-access-4z8vc\") pod \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.939314 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-nb\") pod \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.939417 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-config\") pod \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.939455 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-sb\") pod \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.939516 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-svc\") pod \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\" (UID: \"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a\") " Nov 24 09:08:51 crc kubenswrapper[4886]: I1124 09:08:51.965668 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-kube-api-access-4z8vc" (OuterVolumeSpecName: "kube-api-access-4z8vc") pod "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" (UID: "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a"). InnerVolumeSpecName "kube-api-access-4z8vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.030252 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" (UID: "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.041068 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z8vc\" (UniqueName: \"kubernetes.io/projected/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-kube-api-access-4z8vc\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.041095 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.044323 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" (UID: "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.044988 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" (UID: "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.068096 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" (UID: "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.095303 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-config" (OuterVolumeSpecName: "config") pod "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" (UID: "a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.142810 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" event={"ID":"a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a","Type":"ContainerDied","Data":"4a0dc994b42d8ef6230d62072cfecd8ea1d2bd0e18badfd7053072217377ffc2"} Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.142951 4886 scope.go:117] "RemoveContainer" containerID="0333e0897d0b7c2f487364113d5b89b47d1882e24e7f01a38ec764a7b93dda93" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.142950 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-j6rct" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.144542 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.144573 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.160917 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.160956 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.160972 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.160985 4886 reconciler_common.go:293] "Volume detached for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.172447 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76455fdd78-8k9rz" podStartSLOduration=16.172421995 podStartE2EDuration="16.172421995s" podCreationTimestamp="2025-11-24 09:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:52.16976513 +0000 UTC m=+1188.056503285" watchObservedRunningTime="2025-11-24 09:08:52.172421995 +0000 UTC m=+1188.059160130" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.200017 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j6rct"] Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.207220 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-j6rct"] Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.774327 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 09:08:52 crc kubenswrapper[4886]: E1124 09:08:52.774851 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerName="dnsmasq-dns" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.774898 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerName="dnsmasq-dns" Nov 24 09:08:52 crc kubenswrapper[4886]: E1124 09:08:52.774923 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerName="init" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.774931 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerName="init" Nov 24 09:08:52 crc 
kubenswrapper[4886]: I1124 09:08:52.775195 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" containerName="dnsmasq-dns" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.776028 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.778139 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zr4cr" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.778510 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.778675 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.790720 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.866104 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a" path="/var/lib/kubelet/pods/a9ffb58a-a86d-40b8-9dc7-3388e0eedb2a/volumes" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.873144 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801740d3-12c4-4576-a79d-186b36e3f079-combined-ca-bundle\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.873290 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/801740d3-12c4-4576-a79d-186b36e3f079-openstack-config\") pod \"openstackclient\" (UID: 
\"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.873323 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ppdd\" (UniqueName: \"kubernetes.io/projected/801740d3-12c4-4576-a79d-186b36e3f079-kube-api-access-4ppdd\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.873371 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/801740d3-12c4-4576-a79d-186b36e3f079-openstack-config-secret\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.975384 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801740d3-12c4-4576-a79d-186b36e3f079-combined-ca-bundle\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.975784 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/801740d3-12c4-4576-a79d-186b36e3f079-openstack-config\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.975817 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ppdd\" (UniqueName: \"kubernetes.io/projected/801740d3-12c4-4576-a79d-186b36e3f079-kube-api-access-4ppdd\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 
crc kubenswrapper[4886]: I1124 09:08:52.975870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/801740d3-12c4-4576-a79d-186b36e3f079-openstack-config-secret\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.977243 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/801740d3-12c4-4576-a79d-186b36e3f079-openstack-config\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.979963 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/801740d3-12c4-4576-a79d-186b36e3f079-openstack-config-secret\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.987745 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801740d3-12c4-4576-a79d-186b36e3f079-combined-ca-bundle\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:52 crc kubenswrapper[4886]: I1124 09:08:52.993395 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ppdd\" (UniqueName: \"kubernetes.io/projected/801740d3-12c4-4576-a79d-186b36e3f079-kube-api-access-4ppdd\") pod \"openstackclient\" (UID: \"801740d3-12c4-4576-a79d-186b36e3f079\") " pod="openstack/openstackclient" Nov 24 09:08:53 crc kubenswrapper[4886]: I1124 09:08:53.152198 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 09:08:53 crc kubenswrapper[4886]: I1124 09:08:53.603723 4886 scope.go:117] "RemoveContainer" containerID="212424341bf6e05770364590b05890293fe77b0701e9d44f15856d2d159517f7" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.045044 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.194963 4886 generic.go:334] "Generic (PLEG): container finished" podID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerID="8d13265eb27febd68b62cd429cdb79754e348aaaf855824ed40c4f409263550d" exitCode=0 Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.195045 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8568c485b4-l582l" event={"ID":"58e8a97b-bd79-4e90-99ef-4e0eab79c454","Type":"ContainerDied","Data":"8d13265eb27febd68b62cd429cdb79754e348aaaf855824ed40c4f409263550d"} Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.198731 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.214671 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76455fdd78-8k9rz" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.266890 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.294909 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8499689d4b-dvrvg"] Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.298646 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8499689d4b-dvrvg" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api-log" containerID="cri-o://1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe" gracePeriod=30 Nov 24 
09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.299414 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-8499689d4b-dvrvg" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api" containerID="cri-o://2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9" gracePeriod=30 Nov 24 09:08:54 crc kubenswrapper[4886]: W1124 09:08:54.303895 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod801740d3_12c4_4576_a79d_186b36e3f079.slice/crio-653c66faf6cc0048c580111c5d22127d3ef66c0efd2510acc5714e504492764e WatchSource:0}: Error finding container 653c66faf6cc0048c580111c5d22127d3ef66c0efd2510acc5714e504492764e: Status 404 returned error can't find the container with id 653c66faf6cc0048c580111c5d22127d3ef66c0efd2510acc5714e504492764e Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.402379 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.520362 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-ovndb-tls-certs\") pod \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.520524 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-httpd-config\") pod \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.520597 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-config\") pod \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.520703 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mgzr\" (UniqueName: \"kubernetes.io/projected/58e8a97b-bd79-4e90-99ef-4e0eab79c454-kube-api-access-2mgzr\") pod \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.520763 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-combined-ca-bundle\") pod \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\" (UID: \"58e8a97b-bd79-4e90-99ef-4e0eab79c454\") " Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.529079 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/58e8a97b-bd79-4e90-99ef-4e0eab79c454-kube-api-access-2mgzr" (OuterVolumeSpecName: "kube-api-access-2mgzr") pod "58e8a97b-bd79-4e90-99ef-4e0eab79c454" (UID: "58e8a97b-bd79-4e90-99ef-4e0eab79c454"). InnerVolumeSpecName "kube-api-access-2mgzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.542135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "58e8a97b-bd79-4e90-99ef-4e0eab79c454" (UID: "58e8a97b-bd79-4e90-99ef-4e0eab79c454"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.601503 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-config" (OuterVolumeSpecName: "config") pod "58e8a97b-bd79-4e90-99ef-4e0eab79c454" (UID: "58e8a97b-bd79-4e90-99ef-4e0eab79c454"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.622972 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.623009 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.623024 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mgzr\" (UniqueName: \"kubernetes.io/projected/58e8a97b-bd79-4e90-99ef-4e0eab79c454-kube-api-access-2mgzr\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.629116 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "58e8a97b-bd79-4e90-99ef-4e0eab79c454" (UID: "58e8a97b-bd79-4e90-99ef-4e0eab79c454"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.654413 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58e8a97b-bd79-4e90-99ef-4e0eab79c454" (UID: "58e8a97b-bd79-4e90-99ef-4e0eab79c454"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.725035 4886 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:54 crc kubenswrapper[4886]: I1124 09:08:54.725076 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e8a97b-bd79-4e90-99ef-4e0eab79c454-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.018477 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.132165 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-combined-ca-bundle\") pod \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.132274 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data\") pod \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.132426 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-976z5\" (UniqueName: \"kubernetes.io/projected/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-kube-api-access-976z5\") pod \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.132485 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-scripts\") pod \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.132548 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-etc-machine-id\") pod \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.132621 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data-custom\") pod \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\" (UID: \"01e6ef25-7f8f-4ebb-9f73-d05aecd19942\") " Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.136821 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "01e6ef25-7f8f-4ebb-9f73-d05aecd19942" (UID: "01e6ef25-7f8f-4ebb-9f73-d05aecd19942"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.155375 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-scripts" (OuterVolumeSpecName: "scripts") pod "01e6ef25-7f8f-4ebb-9f73-d05aecd19942" (UID: "01e6ef25-7f8f-4ebb-9f73-d05aecd19942"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.164834 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-kube-api-access-976z5" (OuterVolumeSpecName: "kube-api-access-976z5") pod "01e6ef25-7f8f-4ebb-9f73-d05aecd19942" (UID: "01e6ef25-7f8f-4ebb-9f73-d05aecd19942"). InnerVolumeSpecName "kube-api-access-976z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.169496 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "01e6ef25-7f8f-4ebb-9f73-d05aecd19942" (UID: "01e6ef25-7f8f-4ebb-9f73-d05aecd19942"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.227523 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01e6ef25-7f8f-4ebb-9f73-d05aecd19942" (UID: "01e6ef25-7f8f-4ebb-9f73-d05aecd19942"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.227830 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8568c485b4-l582l" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.228801 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8568c485b4-l582l" event={"ID":"58e8a97b-bd79-4e90-99ef-4e0eab79c454","Type":"ContainerDied","Data":"43c4f475050ee811619b226dfd84d447e64cc208fec0b54014e2c6f1eadd6569"} Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.228842 4886 scope.go:117] "RemoveContainer" containerID="889ae79220a89d15eec8f56867b0331a8b9a2285a600e5e3ca9e38df50c45943" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.235129 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.236633 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-976z5\" (UniqueName: \"kubernetes.io/projected/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-kube-api-access-976z5\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.236652 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.236665 4886 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.236677 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.243872 4886 generic.go:334] "Generic 
(PLEG): container finished" podID="dbac479b-45c3-44c6-985a-b88d878f3506" containerID="1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe" exitCode=143 Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.243944 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8499689d4b-dvrvg" event={"ID":"dbac479b-45c3-44c6-985a-b88d878f3506","Type":"ContainerDied","Data":"1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe"} Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.253285 4886 generic.go:334] "Generic (PLEG): container finished" podID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerID="13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63" exitCode=0 Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.253368 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01e6ef25-7f8f-4ebb-9f73-d05aecd19942","Type":"ContainerDied","Data":"13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63"} Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.253397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"01e6ef25-7f8f-4ebb-9f73-d05aecd19942","Type":"ContainerDied","Data":"faeb8d32b1462126c14dd028e6622054d465dbc49ff181367a9d176022a5ab2f"} Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.253460 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.266347 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"801740d3-12c4-4576-a79d-186b36e3f079","Type":"ContainerStarted","Data":"653c66faf6cc0048c580111c5d22127d3ef66c0efd2510acc5714e504492764e"} Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.296169 4886 scope.go:117] "RemoveContainer" containerID="8d13265eb27febd68b62cd429cdb79754e348aaaf855824ed40c4f409263550d" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.299065 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8568c485b4-l582l"] Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.333608 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8568c485b4-l582l"] Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.343757 4886 scope.go:117] "RemoveContainer" containerID="9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.368850 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data" (OuterVolumeSpecName: "config-data") pod "01e6ef25-7f8f-4ebb-9f73-d05aecd19942" (UID: "01e6ef25-7f8f-4ebb-9f73-d05aecd19942"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.377416 4886 scope.go:117] "RemoveContainer" containerID="13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.415518 4886 scope.go:117] "RemoveContainer" containerID="9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb" Nov 24 09:08:55 crc kubenswrapper[4886]: E1124 09:08:55.418797 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb\": container with ID starting with 9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb not found: ID does not exist" containerID="9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.418826 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb"} err="failed to get container status \"9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb\": rpc error: code = NotFound desc = could not find container \"9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb\": container with ID starting with 9e97d86b69b54d23565e7fb48930e481f69e1f1df5e12bafbf4041b42ab637eb not found: ID does not exist" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.418850 4886 scope.go:117] "RemoveContainer" containerID="13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63" Nov 24 09:08:55 crc kubenswrapper[4886]: E1124 09:08:55.423798 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63\": container with ID starting with 
13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63 not found: ID does not exist" containerID="13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.423941 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63"} err="failed to get container status \"13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63\": rpc error: code = NotFound desc = could not find container \"13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63\": container with ID starting with 13630456e2200514f22f6c0d41a0156da7e8e3f4be23605e8f8ad37556501a63 not found: ID does not exist" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.444983 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e6ef25-7f8f-4ebb-9f73-d05aecd19942-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.609242 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.625830 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.642282 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:55 crc kubenswrapper[4886]: E1124 09:08:55.642717 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerName="cinder-scheduler" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.642735 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerName="cinder-scheduler" Nov 24 09:08:55 crc kubenswrapper[4886]: E1124 09:08:55.642760 4886 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerName="probe" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.642768 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerName="probe" Nov 24 09:08:55 crc kubenswrapper[4886]: E1124 09:08:55.642798 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerName="neutron-httpd" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.642809 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerName="neutron-httpd" Nov 24 09:08:55 crc kubenswrapper[4886]: E1124 09:08:55.642829 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerName="neutron-api" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.642835 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerName="neutron-api" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.647324 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerName="neutron-httpd" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.647390 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerName="cinder-scheduler" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.647415 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" containerName="neutron-api" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.647427 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" containerName="probe" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.657024 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.664765 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.667077 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.753677 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.753813 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ghbz\" (UniqueName: \"kubernetes.io/projected/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-kube-api-access-4ghbz\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.753881 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.753937 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc 
kubenswrapper[4886]: I1124 09:08:55.753963 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-config-data\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.754058 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-scripts\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.856628 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-scripts\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.856699 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.856778 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ghbz\" (UniqueName: \"kubernetes.io/projected/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-kube-api-access-4ghbz\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.856830 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.856876 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.856908 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-config-data\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.867662 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.872224 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-scripts\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.878107 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " 
pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.888185 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ghbz\" (UniqueName: \"kubernetes.io/projected/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-kube-api-access-4ghbz\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.890447 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-config-data\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:55 crc kubenswrapper[4886]: I1124 09:08:55.895399 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20a1d599-cfce-400c-a6d9-9a060ebe4b8e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20a1d599-cfce-400c-a6d9-9a060ebe4b8e\") " pod="openstack/cinder-scheduler-0" Nov 24 09:08:56 crc kubenswrapper[4886]: I1124 09:08:56.001081 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:08:56 crc kubenswrapper[4886]: I1124 09:08:56.614697 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:08:56 crc kubenswrapper[4886]: I1124 09:08:56.868547 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e6ef25-7f8f-4ebb-9f73-d05aecd19942" path="/var/lib/kubelet/pods/01e6ef25-7f8f-4ebb-9f73-d05aecd19942/volumes" Nov 24 09:08:56 crc kubenswrapper[4886]: I1124 09:08:56.869946 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e8a97b-bd79-4e90-99ef-4e0eab79c454" path="/var/lib/kubelet/pods/58e8a97b-bd79-4e90-99ef-4e0eab79c454/volumes" Nov 24 09:08:57 crc kubenswrapper[4886]: I1124 09:08:57.303066 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20a1d599-cfce-400c-a6d9-9a060ebe4b8e","Type":"ContainerStarted","Data":"bb43ea245a534c6a77e133cddfffc276018b00d81475163dffd5923a7d30463b"} Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.044709 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.123074 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-config-data\") pod \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.123180 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-run-httpd\") pod \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.123214 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-combined-ca-bundle\") pod \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.123304 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-log-httpd\") pod \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.123327 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-sg-core-conf-yaml\") pod \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.123412 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqlz2\" (UniqueName: 
\"kubernetes.io/projected/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-kube-api-access-sqlz2\") pod \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.123467 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-scripts\") pod \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\" (UID: \"6e8de0fe-585f-4cb8-8deb-d788e443fbd2\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.129897 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e8de0fe-585f-4cb8-8deb-d788e443fbd2" (UID: "6e8de0fe-585f-4cb8-8deb-d788e443fbd2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.130467 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e8de0fe-585f-4cb8-8deb-d788e443fbd2" (UID: "6e8de0fe-585f-4cb8-8deb-d788e443fbd2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.147454 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-kube-api-access-sqlz2" (OuterVolumeSpecName: "kube-api-access-sqlz2") pod "6e8de0fe-585f-4cb8-8deb-d788e443fbd2" (UID: "6e8de0fe-585f-4cb8-8deb-d788e443fbd2"). InnerVolumeSpecName "kube-api-access-sqlz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.147468 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-scripts" (OuterVolumeSpecName: "scripts") pod "6e8de0fe-585f-4cb8-8deb-d788e443fbd2" (UID: "6e8de0fe-585f-4cb8-8deb-d788e443fbd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.159254 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e8de0fe-585f-4cb8-8deb-d788e443fbd2" (UID: "6e8de0fe-585f-4cb8-8deb-d788e443fbd2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.164696 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.224776 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data-custom\") pod \"dbac479b-45c3-44c6-985a-b88d878f3506\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.224903 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac479b-45c3-44c6-985a-b88d878f3506-logs\") pod \"dbac479b-45c3-44c6-985a-b88d878f3506\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.224929 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-combined-ca-bundle\") pod \"dbac479b-45c3-44c6-985a-b88d878f3506\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.225003 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84jg9\" (UniqueName: \"kubernetes.io/projected/dbac479b-45c3-44c6-985a-b88d878f3506-kube-api-access-84jg9\") pod \"dbac479b-45c3-44c6-985a-b88d878f3506\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.225110 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data\") pod \"dbac479b-45c3-44c6-985a-b88d878f3506\" (UID: \"dbac479b-45c3-44c6-985a-b88d878f3506\") " Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.227695 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.227737 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.227754 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqlz2\" (UniqueName: \"kubernetes.io/projected/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-kube-api-access-sqlz2\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.227768 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.227779 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.229581 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbac479b-45c3-44c6-985a-b88d878f3506-logs" (OuterVolumeSpecName: "logs") pod "dbac479b-45c3-44c6-985a-b88d878f3506" (UID: "dbac479b-45c3-44c6-985a-b88d878f3506"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.242904 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e8de0fe-585f-4cb8-8deb-d788e443fbd2" (UID: "6e8de0fe-585f-4cb8-8deb-d788e443fbd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.259523 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dbac479b-45c3-44c6-985a-b88d878f3506" (UID: "dbac479b-45c3-44c6-985a-b88d878f3506"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.276429 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbac479b-45c3-44c6-985a-b88d878f3506-kube-api-access-84jg9" (OuterVolumeSpecName: "kube-api-access-84jg9") pod "dbac479b-45c3-44c6-985a-b88d878f3506" (UID: "dbac479b-45c3-44c6-985a-b88d878f3506"). InnerVolumeSpecName "kube-api-access-84jg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.321530 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbac479b-45c3-44c6-985a-b88d878f3506" (UID: "dbac479b-45c3-44c6-985a-b88d878f3506"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.323402 4886 generic.go:334] "Generic (PLEG): container finished" podID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerID="d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174" exitCode=137 Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.323470 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e8de0fe-585f-4cb8-8deb-d788e443fbd2","Type":"ContainerDied","Data":"d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174"} Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.323501 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e8de0fe-585f-4cb8-8deb-d788e443fbd2","Type":"ContainerDied","Data":"e4fe14f89fe8f1c0e76de2a63b4b7641b5a9028dd172bdce6649992e9ff00e85"} Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.323521 4886 scope.go:117] "RemoveContainer" containerID="d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.323673 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.328000 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-config-data" (OuterVolumeSpecName: "config-data") pod "6e8de0fe-585f-4cb8-8deb-d788e443fbd2" (UID: "6e8de0fe-585f-4cb8-8deb-d788e443fbd2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.329673 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84jg9\" (UniqueName: \"kubernetes.io/projected/dbac479b-45c3-44c6-985a-b88d878f3506-kube-api-access-84jg9\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.330852 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.330966 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e8de0fe-585f-4cb8-8deb-d788e443fbd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.331045 4886 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.331129 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbac479b-45c3-44c6-985a-b88d878f3506-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.331242 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.333105 4886 generic.go:334] "Generic (PLEG): container finished" podID="dbac479b-45c3-44c6-985a-b88d878f3506" containerID="2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9" exitCode=0 Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.333217 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8499689d4b-dvrvg" event={"ID":"dbac479b-45c3-44c6-985a-b88d878f3506","Type":"ContainerDied","Data":"2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9"} Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.333257 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8499689d4b-dvrvg" event={"ID":"dbac479b-45c3-44c6-985a-b88d878f3506","Type":"ContainerDied","Data":"3c65ddea40b6a39588b588c9eb16dc50e0ed3126b076bdc3124924af65a2f7fc"} Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.333347 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8499689d4b-dvrvg" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.344165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20a1d599-cfce-400c-a6d9-9a060ebe4b8e","Type":"ContainerStarted","Data":"708c78f55fb448a9458f5e0a36c2ce6647e313e9a6d0a299e7601a0fae2f7ae6"} Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.349096 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data" (OuterVolumeSpecName: "config-data") pod "dbac479b-45c3-44c6-985a-b88d878f3506" (UID: "dbac479b-45c3-44c6-985a-b88d878f3506"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.352523 4886 scope.go:117] "RemoveContainer" containerID="69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.378087 4886 scope.go:117] "RemoveContainer" containerID="a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.427369 4886 scope.go:117] "RemoveContainer" containerID="d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.427856 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174\": container with ID starting with d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174 not found: ID does not exist" containerID="d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.427897 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174"} err="failed to get container status \"d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174\": rpc error: code = NotFound desc = could not find container \"d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174\": container with ID starting with d60efe63b5b14f25d7b74a58e5802f3f1e8d9632266b07f9a2ec246deebd6174 not found: ID does not exist" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.427970 4886 scope.go:117] "RemoveContainer" containerID="69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.428555 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d\": container with ID starting with 69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d not found: ID does not exist" containerID="69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.428584 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d"} err="failed to get container status \"69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d\": rpc error: code = NotFound desc = could not find container \"69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d\": container with ID starting with 69b02fe9c17baa35f3e9f196d186d3b4a8906e2c02a6354e1311d809807c4d5d not found: ID does not exist" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.428609 4886 scope.go:117] "RemoveContainer" containerID="a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.428978 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b\": container with ID starting with a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b not found: ID does not exist" containerID="a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.429012 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b"} err="failed to get container status \"a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b\": rpc error: code = NotFound desc = could not find container \"a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b\": 
container with ID starting with a76fc97611d3be26cdd4e41ad350c79afbc3b4cd82d9c74df46643996426a51b not found: ID does not exist" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.429036 4886 scope.go:117] "RemoveContainer" containerID="2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.435794 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbac479b-45c3-44c6-985a-b88d878f3506-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.465439 4886 scope.go:117] "RemoveContainer" containerID="1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.491627 4886 scope.go:117] "RemoveContainer" containerID="2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.492318 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9\": container with ID starting with 2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9 not found: ID does not exist" containerID="2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.492359 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9"} err="failed to get container status \"2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9\": rpc error: code = NotFound desc = could not find container \"2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9\": container with ID starting with 2c6616ba5f79dbe2e925f8c267f6a0309d0f5343f60a70b1827a51fd481f46d9 not found: ID does not exist" Nov 24 09:08:58 
crc kubenswrapper[4886]: I1124 09:08:58.492386 4886 scope.go:117] "RemoveContainer" containerID="1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.492777 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe\": container with ID starting with 1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe not found: ID does not exist" containerID="1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.492800 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe"} err="failed to get container status \"1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe\": rpc error: code = NotFound desc = could not find container \"1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe\": container with ID starting with 1a369f84939ccbd739030f6008173c67a0bd3a6677dd8192d9079309c8b47bbe not found: ID does not exist" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.689439 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.700662 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.710933 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-8499689d4b-dvrvg"] Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.719756 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-8499689d4b-dvrvg"] Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.724784 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 
09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.725284 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725312 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.725328 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="ceilometer-notification-agent" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725334 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="ceilometer-notification-agent" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.725367 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="sg-core" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725375 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="sg-core" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.725402 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api-log" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725409 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api-log" Nov 24 09:08:58 crc kubenswrapper[4886]: E1124 09:08:58.725432 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="proxy-httpd" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725440 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="proxy-httpd" Nov 24 09:08:58 crc 
kubenswrapper[4886]: I1124 09:08:58.725662 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api-log" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725691 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="sg-core" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725705 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" containerName="barbican-api" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725718 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="proxy-httpd" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.725733 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" containerName="ceilometer-notification-agent" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.727805 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.731822 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.731822 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.739760 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.844683 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.844755 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-scripts\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.844815 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-log-httpd\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.844892 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-run-httpd\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " 
pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.844930 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.844964 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmk9\" (UniqueName: \"kubernetes.io/projected/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-kube-api-access-lrmk9\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.844994 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-config-data\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.913514 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8de0fe-585f-4cb8-8deb-d788e443fbd2" path="/var/lib/kubelet/pods/6e8de0fe-585f-4cb8-8deb-d788e443fbd2/volumes" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.914799 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbac479b-45c3-44c6-985a-b88d878f3506" path="/var/lib/kubelet/pods/dbac479b-45c3-44c6-985a-b88d878f3506/volumes" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.949872 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-config-data\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " 
pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.950003 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.950078 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-scripts\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.950183 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-log-httpd\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.950292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-run-httpd\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.950340 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.950370 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrmk9\" (UniqueName: 
\"kubernetes.io/projected/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-kube-api-access-lrmk9\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.953034 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-log-httpd\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.953129 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-run-httpd\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.957353 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-scripts\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.958498 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-config-data\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.972236 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.973708 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:58 crc kubenswrapper[4886]: I1124 09:08:58.975832 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrmk9\" (UniqueName: \"kubernetes.io/projected/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-kube-api-access-lrmk9\") pod \"ceilometer-0\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " pod="openstack/ceilometer-0" Nov 24 09:08:59 crc kubenswrapper[4886]: I1124 09:08:59.058299 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:08:59 crc kubenswrapper[4886]: I1124 09:08:59.371330 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20a1d599-cfce-400c-a6d9-9a060ebe4b8e","Type":"ContainerStarted","Data":"8a9df3dca64c2d672fdf38c9fe964bf9729916b1c24c329bbd2021ac96faafa1"} Nov 24 09:08:59 crc kubenswrapper[4886]: I1124 09:08:59.412262 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.412225844 podStartE2EDuration="4.412225844s" podCreationTimestamp="2025-11-24 09:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:59.399503908 +0000 UTC m=+1195.286242063" watchObservedRunningTime="2025-11-24 09:08:59.412225844 +0000 UTC m=+1195.298963989" Nov 24 09:08:59 crc kubenswrapper[4886]: I1124 09:08:59.650873 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:08:59 crc kubenswrapper[4886]: I1124 09:08:59.867629 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75ffb75746-pwc5g" 
podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 24 09:09:00 crc kubenswrapper[4886]: I1124 09:09:00.392873 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerStarted","Data":"9ff91a58d70f1ecc134738515057d780f6030ae24ca7e6af788efad1a750ed31"} Nov 24 09:09:01 crc kubenswrapper[4886]: I1124 09:09:01.001261 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 09:09:01 crc kubenswrapper[4886]: I1124 09:09:01.406039 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerStarted","Data":"f7f56e2886b0914d07b49c4386195ea1f33c7a6a080d3a6058f165d109afb945"} Nov 24 09:09:01 crc kubenswrapper[4886]: I1124 09:09:01.784616 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:09:01 crc kubenswrapper[4886]: I1124 09:09:01.784676 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.675397 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-558564f98c-jl2ms"] Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.678013 4886 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.686607 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.686819 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.687397 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.700304 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-558564f98c-jl2ms"] Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.733870 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-log-httpd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.733915 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbcd\" (UniqueName: \"kubernetes.io/projected/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-kube-api-access-nvbcd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.733953 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-etc-swift\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 
crc kubenswrapper[4886]: I1124 09:09:02.734004 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-config-data\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.734027 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-public-tls-certs\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.734055 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-run-httpd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.734086 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-combined-ca-bundle\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.734221 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-internal-tls-certs\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " 
pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.836622 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-internal-tls-certs\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.836735 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-log-httpd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.836761 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbcd\" (UniqueName: \"kubernetes.io/projected/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-kube-api-access-nvbcd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.837609 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-etc-swift\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.837645 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-log-httpd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc 
kubenswrapper[4886]: I1124 09:09:02.837807 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-config-data\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.837854 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-public-tls-certs\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.837921 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-run-httpd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.838013 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-combined-ca-bundle\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.839062 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-run-httpd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.844187 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-public-tls-certs\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.844668 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-etc-swift\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.847371 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-internal-tls-certs\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.854530 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-combined-ca-bundle\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.854855 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-config-data\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:02 crc kubenswrapper[4886]: I1124 09:09:02.858214 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbcd\" (UniqueName: 
\"kubernetes.io/projected/c1f11d5d-8b31-47b7-9ceb-197d5ca23475-kube-api-access-nvbcd\") pod \"swift-proxy-558564f98c-jl2ms\" (UID: \"c1f11d5d-8b31-47b7-9ceb-197d5ca23475\") " pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:03 crc kubenswrapper[4886]: I1124 09:09:03.009423 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:03 crc kubenswrapper[4886]: I1124 09:09:03.156388 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.325872 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-t72qh"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.329862 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.339866 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-t72qh"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.419463 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkxd\" (UniqueName: \"kubernetes.io/projected/902ebf1f-132f-40a8-b469-d816a555740e-kube-api-access-gtkxd\") pod \"nova-api-db-create-t72qh\" (UID: \"902ebf1f-132f-40a8-b469-d816a555740e\") " pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.419792 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/902ebf1f-132f-40a8-b469-d816a555740e-operator-scripts\") pod \"nova-api-db-create-t72qh\" (UID: \"902ebf1f-132f-40a8-b469-d816a555740e\") " pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.523739 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/902ebf1f-132f-40a8-b469-d816a555740e-operator-scripts\") pod \"nova-api-db-create-t72qh\" (UID: \"902ebf1f-132f-40a8-b469-d816a555740e\") " pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.523903 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtkxd\" (UniqueName: \"kubernetes.io/projected/902ebf1f-132f-40a8-b469-d816a555740e-kube-api-access-gtkxd\") pod \"nova-api-db-create-t72qh\" (UID: \"902ebf1f-132f-40a8-b469-d816a555740e\") " pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.526163 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/902ebf1f-132f-40a8-b469-d816a555740e-operator-scripts\") pod \"nova-api-db-create-t72qh\" (UID: \"902ebf1f-132f-40a8-b469-d816a555740e\") " pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.557934 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-krgfp"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.559551 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.583838 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtkxd\" (UniqueName: \"kubernetes.io/projected/902ebf1f-132f-40a8-b469-d816a555740e-kube-api-access-gtkxd\") pod \"nova-api-db-create-t72qh\" (UID: \"902ebf1f-132f-40a8-b469-d816a555740e\") " pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.625428 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-bf4c-account-create-2vcrs"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.626590 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpnb9\" (UniqueName: \"kubernetes.io/projected/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-kube-api-access-jpnb9\") pod \"nova-cell0-db-create-krgfp\" (UID: \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\") " pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.626718 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-operator-scripts\") pod \"nova-cell0-db-create-krgfp\" (UID: \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\") " pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.627567 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.629804 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.638996 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-krgfp"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.655880 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-bf4c-account-create-2vcrs"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.680227 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vbj6z"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.682482 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.699378 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vbj6z"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.712278 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.729973 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zswfn\" (UniqueName: \"kubernetes.io/projected/78e1de22-7cd4-4929-b282-886695a613c2-kube-api-access-zswfn\") pod \"nova-api-bf4c-account-create-2vcrs\" (UID: \"78e1de22-7cd4-4929-b282-886695a613c2\") " pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.730128 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e1de22-7cd4-4929-b282-886695a613c2-operator-scripts\") pod \"nova-api-bf4c-account-create-2vcrs\" (UID: \"78e1de22-7cd4-4929-b282-886695a613c2\") " pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.730180 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-operator-scripts\") pod \"nova-cell0-db-create-krgfp\" (UID: \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\") " pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.730224 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/820724e3-dec6-48f0-8626-e287d58059d3-operator-scripts\") pod \"nova-cell1-db-create-vbj6z\" (UID: \"820724e3-dec6-48f0-8626-e287d58059d3\") " pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.730318 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw8ss\" (UniqueName: 
\"kubernetes.io/projected/820724e3-dec6-48f0-8626-e287d58059d3-kube-api-access-zw8ss\") pod \"nova-cell1-db-create-vbj6z\" (UID: \"820724e3-dec6-48f0-8626-e287d58059d3\") " pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.730343 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpnb9\" (UniqueName: \"kubernetes.io/projected/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-kube-api-access-jpnb9\") pod \"nova-cell0-db-create-krgfp\" (UID: \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\") " pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.731586 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-operator-scripts\") pod \"nova-cell0-db-create-krgfp\" (UID: \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\") " pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.755714 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpnb9\" (UniqueName: \"kubernetes.io/projected/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-kube-api-access-jpnb9\") pod \"nova-cell0-db-create-krgfp\" (UID: \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\") " pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.757089 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3106-account-create-rd5w6"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.758712 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.762131 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.768172 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3106-account-create-rd5w6"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.832841 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d522eb1-b9f9-47ae-bb27-616dffd736d3-operator-scripts\") pod \"nova-cell0-3106-account-create-rd5w6\" (UID: \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\") " pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.832906 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/820724e3-dec6-48f0-8626-e287d58059d3-operator-scripts\") pod \"nova-cell1-db-create-vbj6z\" (UID: \"820724e3-dec6-48f0-8626-e287d58059d3\") " pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.832930 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hndkb\" (UniqueName: \"kubernetes.io/projected/3d522eb1-b9f9-47ae-bb27-616dffd736d3-kube-api-access-hndkb\") pod \"nova-cell0-3106-account-create-rd5w6\" (UID: \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\") " pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.833045 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw8ss\" (UniqueName: \"kubernetes.io/projected/820724e3-dec6-48f0-8626-e287d58059d3-kube-api-access-zw8ss\") pod \"nova-cell1-db-create-vbj6z\" (UID: 
\"820724e3-dec6-48f0-8626-e287d58059d3\") " pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.833106 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zswfn\" (UniqueName: \"kubernetes.io/projected/78e1de22-7cd4-4929-b282-886695a613c2-kube-api-access-zswfn\") pod \"nova-api-bf4c-account-create-2vcrs\" (UID: \"78e1de22-7cd4-4929-b282-886695a613c2\") " pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.833241 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e1de22-7cd4-4929-b282-886695a613c2-operator-scripts\") pod \"nova-api-bf4c-account-create-2vcrs\" (UID: \"78e1de22-7cd4-4929-b282-886695a613c2\") " pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.833994 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/820724e3-dec6-48f0-8626-e287d58059d3-operator-scripts\") pod \"nova-cell1-db-create-vbj6z\" (UID: \"820724e3-dec6-48f0-8626-e287d58059d3\") " pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.834679 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e1de22-7cd4-4929-b282-886695a613c2-operator-scripts\") pod \"nova-api-bf4c-account-create-2vcrs\" (UID: \"78e1de22-7cd4-4929-b282-886695a613c2\") " pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.857042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw8ss\" (UniqueName: \"kubernetes.io/projected/820724e3-dec6-48f0-8626-e287d58059d3-kube-api-access-zw8ss\") pod \"nova-cell1-db-create-vbj6z\" (UID: 
\"820724e3-dec6-48f0-8626-e287d58059d3\") " pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.857746 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zswfn\" (UniqueName: \"kubernetes.io/projected/78e1de22-7cd4-4929-b282-886695a613c2-kube-api-access-zswfn\") pod \"nova-api-bf4c-account-create-2vcrs\" (UID: \"78e1de22-7cd4-4929-b282-886695a613c2\") " pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.935559 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-01c7-account-create-h5txn"] Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.937195 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d522eb1-b9f9-47ae-bb27-616dffd736d3-operator-scripts\") pod \"nova-cell0-3106-account-create-rd5w6\" (UID: \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\") " pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.937270 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hndkb\" (UniqueName: \"kubernetes.io/projected/3d522eb1-b9f9-47ae-bb27-616dffd736d3-kube-api-access-hndkb\") pod \"nova-cell0-3106-account-create-rd5w6\" (UID: \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\") " pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.938979 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d522eb1-b9f9-47ae-bb27-616dffd736d3-operator-scripts\") pod \"nova-cell0-3106-account-create-rd5w6\" (UID: \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\") " pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.947751 4886 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.949866 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.953407 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.959729 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.970293 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hndkb\" (UniqueName: \"kubernetes.io/projected/3d522eb1-b9f9-47ae-bb27-616dffd736d3-kube-api-access-hndkb\") pod \"nova-cell0-3106-account-create-rd5w6\" (UID: \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\") " pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:05 crc kubenswrapper[4886]: I1124 09:09:05.972312 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-01c7-account-create-h5txn"] Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.023067 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.039499 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58780350-18ab-4b0c-ace4-fa09769e0266-operator-scripts\") pod \"nova-cell1-01c7-account-create-h5txn\" (UID: \"58780350-18ab-4b0c-ace4-fa09769e0266\") " pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.040034 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qmk\" (UniqueName: \"kubernetes.io/projected/58780350-18ab-4b0c-ace4-fa09769e0266-kube-api-access-f4qmk\") pod \"nova-cell1-01c7-account-create-h5txn\" (UID: \"58780350-18ab-4b0c-ace4-fa09769e0266\") " pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.132760 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.146506 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4qmk\" (UniqueName: \"kubernetes.io/projected/58780350-18ab-4b0c-ace4-fa09769e0266-kube-api-access-f4qmk\") pod \"nova-cell1-01c7-account-create-h5txn\" (UID: \"58780350-18ab-4b0c-ace4-fa09769e0266\") " pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.146642 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58780350-18ab-4b0c-ace4-fa09769e0266-operator-scripts\") pod \"nova-cell1-01c7-account-create-h5txn\" (UID: \"58780350-18ab-4b0c-ace4-fa09769e0266\") " pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.147680 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58780350-18ab-4b0c-ace4-fa09769e0266-operator-scripts\") pod \"nova-cell1-01c7-account-create-h5txn\" (UID: \"58780350-18ab-4b0c-ace4-fa09769e0266\") " pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.171308 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4qmk\" (UniqueName: \"kubernetes.io/projected/58780350-18ab-4b0c-ace4-fa09769e0266-kube-api-access-f4qmk\") pod \"nova-cell1-01c7-account-create-h5txn\" (UID: \"58780350-18ab-4b0c-ace4-fa09769e0266\") " pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.279307 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 09:09:06 crc kubenswrapper[4886]: I1124 09:09:06.325272 4886 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:07 crc kubenswrapper[4886]: I1124 09:09:07.028478 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:09:07 crc kubenswrapper[4886]: I1124 09:09:07.028769 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerName="glance-log" containerID="cri-o://8e39182930d884585426ec654fe44ba08ebe43f29f1ea47f6779bc6dbdc6d168" gracePeriod=30 Nov 24 09:09:07 crc kubenswrapper[4886]: I1124 09:09:07.028845 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerName="glance-httpd" containerID="cri-o://c955b1a526ed486b0ca332997a1e4a75451163825649ca98e777b93f4c4dd6aa" gracePeriod=30 Nov 24 09:09:07 crc kubenswrapper[4886]: I1124 09:09:07.510808 4886 generic.go:334] "Generic (PLEG): container finished" podID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerID="8e39182930d884585426ec654fe44ba08ebe43f29f1ea47f6779bc6dbdc6d168" exitCode=143 Nov 24 09:09:07 crc kubenswrapper[4886]: I1124 09:09:07.510855 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"caebd1b1-b583-446f-bfc8-9c4a1be619da","Type":"ContainerDied","Data":"8e39182930d884585426ec654fe44ba08ebe43f29f1ea47f6779bc6dbdc6d168"} Nov 24 09:09:07 crc kubenswrapper[4886]: I1124 09:09:07.910189 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:09:07 crc kubenswrapper[4886]: I1124 09:09:07.910512 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerName="glance-log" 
containerID="cri-o://93dad6caf21ecc145b49baafdfd8e66227d03285557979991d216a83b024b3c6" gracePeriod=30 Nov 24 09:09:07 crc kubenswrapper[4886]: I1124 09:09:07.910676 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerName="glance-httpd" containerID="cri-o://0a2394bb23bdc576103a3a750bc16ce96e45f561bdb043e7aa24c8b8b48fbe56" gracePeriod=30 Nov 24 09:09:08 crc kubenswrapper[4886]: I1124 09:09:08.553293 4886 generic.go:334] "Generic (PLEG): container finished" podID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerID="93dad6caf21ecc145b49baafdfd8e66227d03285557979991d216a83b024b3c6" exitCode=143 Nov 24 09:09:08 crc kubenswrapper[4886]: I1124 09:09:08.555057 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4a3df3e-e493-448e-afb1-b52e1a50437a","Type":"ContainerDied","Data":"93dad6caf21ecc145b49baafdfd8e66227d03285557979991d216a83b024b3c6"} Nov 24 09:09:08 crc kubenswrapper[4886]: I1124 09:09:08.588664 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vbj6z"] Nov 24 09:09:08 crc kubenswrapper[4886]: I1124 09:09:08.931511 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-bf4c-account-create-2vcrs"] Nov 24 09:09:08 crc kubenswrapper[4886]: W1124 09:09:08.934622 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78e1de22_7cd4_4929_b282_886695a613c2.slice/crio-dbf415ade63df4df2a7dc7c6b8c9041d60f3cd45d9bb7297a96b43a0022e9ae3 WatchSource:0}: Error finding container dbf415ade63df4df2a7dc7c6b8c9041d60f3cd45d9bb7297a96b43a0022e9ae3: Status 404 returned error can't find the container with id dbf415ade63df4df2a7dc7c6b8c9041d60f3cd45d9bb7297a96b43a0022e9ae3 Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.034701 4886 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-krgfp"] Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.051238 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-01c7-account-create-h5txn"] Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.062848 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-558564f98c-jl2ms"] Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.078248 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3106-account-create-rd5w6"] Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.197428 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-t72qh"] Nov 24 09:09:09 crc kubenswrapper[4886]: W1124 09:09:09.302270 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod902ebf1f_132f_40a8_b469_d816a555740e.slice/crio-a1874849dc8d2f7daebfb7fc8bc983dd01de5e3444a668e5af96c1ade874f822 WatchSource:0}: Error finding container a1874849dc8d2f7daebfb7fc8bc983dd01de5e3444a668e5af96c1ade874f822: Status 404 returned error can't find the container with id a1874849dc8d2f7daebfb7fc8bc983dd01de5e3444a668e5af96c1ade874f822 Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.571389 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-krgfp" event={"ID":"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a","Type":"ContainerStarted","Data":"3b1153f7c29d442470085ddffc0986736145295493087cde55e5fc1945443da8"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.577789 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3106-account-create-rd5w6" event={"ID":"3d522eb1-b9f9-47ae-bb27-616dffd736d3","Type":"ContainerStarted","Data":"be9015b6e0d349f6217544b8e9645a579e66dc71c2cd4dc298773b18d97a59fa"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.582810 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-01c7-account-create-h5txn" event={"ID":"58780350-18ab-4b0c-ace4-fa09769e0266","Type":"ContainerStarted","Data":"8766467d25b1045be73ca05429414766d88a4024d0760bba2df09828d4f4a0a0"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.582853 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-01c7-account-create-h5txn" event={"ID":"58780350-18ab-4b0c-ace4-fa09769e0266","Type":"ContainerStarted","Data":"1747a9de11e320924ef4ac21f9e050079e90e4da17b26cdcb774cc7be0f62982"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.587217 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-bf4c-account-create-2vcrs" event={"ID":"78e1de22-7cd4-4929-b282-886695a613c2","Type":"ContainerStarted","Data":"41646597baed68efe99287ccad64edc11f2729fa330d98fb1d265cf588e08baf"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.587255 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-bf4c-account-create-2vcrs" event={"ID":"78e1de22-7cd4-4929-b282-886695a613c2","Type":"ContainerStarted","Data":"dbf415ade63df4df2a7dc7c6b8c9041d60f3cd45d9bb7297a96b43a0022e9ae3"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.589069 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"801740d3-12c4-4576-a79d-186b36e3f079","Type":"ContainerStarted","Data":"5b5c32aafce9ce1e327d39785b8feded8d05e5f550a65426d29b6a963aaeae21"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.591482 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t72qh" event={"ID":"902ebf1f-132f-40a8-b469-d816a555740e","Type":"ContainerStarted","Data":"a1874849dc8d2f7daebfb7fc8bc983dd01de5e3444a668e5af96c1ade874f822"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.596510 4886 generic.go:334] "Generic (PLEG): container finished" podID="820724e3-dec6-48f0-8626-e287d58059d3" 
containerID="6d670346f6590ef8a761045a7345b4b5931daeb38d9a47c05a3863b4e7414f9b" exitCode=0 Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.596570 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vbj6z" event={"ID":"820724e3-dec6-48f0-8626-e287d58059d3","Type":"ContainerDied","Data":"6d670346f6590ef8a761045a7345b4b5931daeb38d9a47c05a3863b4e7414f9b"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.596595 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vbj6z" event={"ID":"820724e3-dec6-48f0-8626-e287d58059d3","Type":"ContainerStarted","Data":"d85431e5733e4a025e21b823f99b0d796e1f44bc13949a1aa1c665c634524417"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.611113 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-558564f98c-jl2ms" event={"ID":"c1f11d5d-8b31-47b7-9ceb-197d5ca23475","Type":"ContainerStarted","Data":"92c1245a7f6ca80fc93d6a7235a44124f44c3ba65e2f3cab42bcd16a6772d581"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.619607 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-01c7-account-create-h5txn" podStartSLOduration=4.619584254 podStartE2EDuration="4.619584254s" podCreationTimestamp="2025-11-24 09:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:09:09.602854205 +0000 UTC m=+1205.489592340" watchObservedRunningTime="2025-11-24 09:09:09.619584254 +0000 UTC m=+1205.506322389" Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.622340 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerStarted","Data":"324f73d7ab0dd8ca79c0d39bb881afe710a2a89e1e6d39ed0e03d182ac4e2035"} Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.644595 4886 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.576845329 podStartE2EDuration="17.644572124s" podCreationTimestamp="2025-11-24 09:08:52 +0000 UTC" firstStartedPulling="2025-11-24 09:08:54.315392061 +0000 UTC m=+1190.202130206" lastFinishedPulling="2025-11-24 09:09:08.383118866 +0000 UTC m=+1204.269857001" observedRunningTime="2025-11-24 09:09:09.640459579 +0000 UTC m=+1205.527197714" watchObservedRunningTime="2025-11-24 09:09:09.644572124 +0000 UTC m=+1205.531310259" Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.688105 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-bf4c-account-create-2vcrs" podStartSLOduration=4.688081244 podStartE2EDuration="4.688081244s" podCreationTimestamp="2025-11-24 09:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:09:09.66760441 +0000 UTC m=+1205.554342545" watchObservedRunningTime="2025-11-24 09:09:09.688081244 +0000 UTC m=+1205.574819389" Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.866853 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75ffb75746-pwc5g" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 24 09:09:09 crc kubenswrapper[4886]: I1124 09:09:09.866984 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.636145 4886 generic.go:334] "Generic (PLEG): container finished" podID="78e1de22-7cd4-4929-b282-886695a613c2" containerID="41646597baed68efe99287ccad64edc11f2729fa330d98fb1d265cf588e08baf" exitCode=0 Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 
09:09:10.636661 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-bf4c-account-create-2vcrs" event={"ID":"78e1de22-7cd4-4929-b282-886695a613c2","Type":"ContainerDied","Data":"41646597baed68efe99287ccad64edc11f2729fa330d98fb1d265cf588e08baf"} Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.642500 4886 generic.go:334] "Generic (PLEG): container finished" podID="8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a" containerID="e8c401767cb66edae23e292dc1f191bbb8eaf7b87218ad36a6b3ecb4a4b5d8b2" exitCode=0 Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.642590 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-krgfp" event={"ID":"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a","Type":"ContainerDied","Data":"e8c401767cb66edae23e292dc1f191bbb8eaf7b87218ad36a6b3ecb4a4b5d8b2"} Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.644754 4886 generic.go:334] "Generic (PLEG): container finished" podID="3d522eb1-b9f9-47ae-bb27-616dffd736d3" containerID="732c728a6b3ed010564394fe88f87ac9941533d77aa0c5dbafea5a33aff6bd43" exitCode=0 Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.644796 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3106-account-create-rd5w6" event={"ID":"3d522eb1-b9f9-47ae-bb27-616dffd736d3","Type":"ContainerDied","Data":"732c728a6b3ed010564394fe88f87ac9941533d77aa0c5dbafea5a33aff6bd43"} Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.659854 4886 generic.go:334] "Generic (PLEG): container finished" podID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerID="c955b1a526ed486b0ca332997a1e4a75451163825649ca98e777b93f4c4dd6aa" exitCode=0 Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.663751 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"caebd1b1-b583-446f-bfc8-9c4a1be619da","Type":"ContainerDied","Data":"c955b1a526ed486b0ca332997a1e4a75451163825649ca98e777b93f4c4dd6aa"} Nov 24 09:09:10 
crc kubenswrapper[4886]: I1124 09:09:10.667608 4886 generic.go:334] "Generic (PLEG): container finished" podID="902ebf1f-132f-40a8-b469-d816a555740e" containerID="00696bc1d377f8a981cb98c84bf8c9ad3a268c7c4ca0677a14dd0a50455372e5" exitCode=0 Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.668165 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t72qh" event={"ID":"902ebf1f-132f-40a8-b469-d816a555740e","Type":"ContainerDied","Data":"00696bc1d377f8a981cb98c84bf8c9ad3a268c7c4ca0677a14dd0a50455372e5"} Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.677626 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-558564f98c-jl2ms" event={"ID":"c1f11d5d-8b31-47b7-9ceb-197d5ca23475","Type":"ContainerStarted","Data":"9e089f2581eec641e60d0fca4460ca43e0acfa1b51c6f1bb7508bcdc1838c50b"} Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.677892 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-558564f98c-jl2ms" event={"ID":"c1f11d5d-8b31-47b7-9ceb-197d5ca23475","Type":"ContainerStarted","Data":"64161eb080502dd88770473c76c594295be42eb8e59ec7f648578716adc71e60"} Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.678844 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.678977 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.682917 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerStarted","Data":"c1b2522ad5371fd44110d4c9a8a5ab64c8d6607e2740f6797e68fe8029809028"} Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.685099 4886 generic.go:334] "Generic (PLEG): container finished" podID="58780350-18ab-4b0c-ace4-fa09769e0266" 
containerID="8766467d25b1045be73ca05429414766d88a4024d0760bba2df09828d4f4a0a0" exitCode=0 Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.685630 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-01c7-account-create-h5txn" event={"ID":"58780350-18ab-4b0c-ace4-fa09769e0266","Type":"ContainerDied","Data":"8766467d25b1045be73ca05429414766d88a4024d0760bba2df09828d4f4a0a0"} Nov 24 09:09:10 crc kubenswrapper[4886]: I1124 09:09:10.731074 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-558564f98c-jl2ms" podStartSLOduration=8.731047718 podStartE2EDuration="8.731047718s" podCreationTimestamp="2025-11-24 09:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:09:10.719770392 +0000 UTC m=+1206.606508547" watchObservedRunningTime="2025-11-24 09:09:10.731047718 +0000 UTC m=+1206.617785853" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.089180 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.279020 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-httpd-run\") pod \"caebd1b1-b583-446f-bfc8-9c4a1be619da\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.279548 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-config-data\") pod \"caebd1b1-b583-446f-bfc8-9c4a1be619da\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.279639 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-public-tls-certs\") pod \"caebd1b1-b583-446f-bfc8-9c4a1be619da\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.279735 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-combined-ca-bundle\") pod \"caebd1b1-b583-446f-bfc8-9c4a1be619da\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.279808 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "caebd1b1-b583-446f-bfc8-9c4a1be619da" (UID: "caebd1b1-b583-446f-bfc8-9c4a1be619da"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.279845 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrhw7\" (UniqueName: \"kubernetes.io/projected/caebd1b1-b583-446f-bfc8-9c4a1be619da-kube-api-access-vrhw7\") pod \"caebd1b1-b583-446f-bfc8-9c4a1be619da\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.279926 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-logs\") pod \"caebd1b1-b583-446f-bfc8-9c4a1be619da\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.280099 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"caebd1b1-b583-446f-bfc8-9c4a1be619da\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.280264 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-scripts\") pod \"caebd1b1-b583-446f-bfc8-9c4a1be619da\" (UID: \"caebd1b1-b583-446f-bfc8-9c4a1be619da\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.282073 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-logs" (OuterVolumeSpecName: "logs") pod "caebd1b1-b583-446f-bfc8-9c4a1be619da" (UID: "caebd1b1-b583-446f-bfc8-9c4a1be619da"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.283473 4886 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.283511 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caebd1b1-b583-446f-bfc8-9c4a1be619da-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.290330 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caebd1b1-b583-446f-bfc8-9c4a1be619da-kube-api-access-vrhw7" (OuterVolumeSpecName: "kube-api-access-vrhw7") pod "caebd1b1-b583-446f-bfc8-9c4a1be619da" (UID: "caebd1b1-b583-446f-bfc8-9c4a1be619da"). InnerVolumeSpecName "kube-api-access-vrhw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.290439 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "caebd1b1-b583-446f-bfc8-9c4a1be619da" (UID: "caebd1b1-b583-446f-bfc8-9c4a1be619da"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.306396 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-scripts" (OuterVolumeSpecName: "scripts") pod "caebd1b1-b583-446f-bfc8-9c4a1be619da" (UID: "caebd1b1-b583-446f-bfc8-9c4a1be619da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.313378 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.340045 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caebd1b1-b583-446f-bfc8-9c4a1be619da" (UID: "caebd1b1-b583-446f-bfc8-9c4a1be619da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.384866 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/820724e3-dec6-48f0-8626-e287d58059d3-operator-scripts\") pod \"820724e3-dec6-48f0-8626-e287d58059d3\" (UID: \"820724e3-dec6-48f0-8626-e287d58059d3\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.385797 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/820724e3-dec6-48f0-8626-e287d58059d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "820724e3-dec6-48f0-8626-e287d58059d3" (UID: "820724e3-dec6-48f0-8626-e287d58059d3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.386106 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw8ss\" (UniqueName: \"kubernetes.io/projected/820724e3-dec6-48f0-8626-e287d58059d3-kube-api-access-zw8ss\") pod \"820724e3-dec6-48f0-8626-e287d58059d3\" (UID: \"820724e3-dec6-48f0-8626-e287d58059d3\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.387251 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.387274 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/820724e3-dec6-48f0-8626-e287d58059d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.387285 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrhw7\" (UniqueName: \"kubernetes.io/projected/caebd1b1-b583-446f-bfc8-9c4a1be619da-kube-api-access-vrhw7\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.387321 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.387332 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.487351 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/820724e3-dec6-48f0-8626-e287d58059d3-kube-api-access-zw8ss" 
(OuterVolumeSpecName: "kube-api-access-zw8ss") pod "820724e3-dec6-48f0-8626-e287d58059d3" (UID: "820724e3-dec6-48f0-8626-e287d58059d3"). InnerVolumeSpecName "kube-api-access-zw8ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.488718 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw8ss\" (UniqueName: \"kubernetes.io/projected/820724e3-dec6-48f0-8626-e287d58059d3-kube-api-access-zw8ss\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.627762 4886 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.677650 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-config-data" (OuterVolumeSpecName: "config-data") pod "caebd1b1-b583-446f-bfc8-9c4a1be619da" (UID: "caebd1b1-b583-446f-bfc8-9c4a1be619da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.698811 4886 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.699197 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.706334 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "caebd1b1-b583-446f-bfc8-9c4a1be619da" (UID: "caebd1b1-b583-446f-bfc8-9c4a1be619da"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.712766 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.714973 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"caebd1b1-b583-446f-bfc8-9c4a1be619da","Type":"ContainerDied","Data":"016d9ca63725e50bf9606cd6fb84645d6efc52876d6cc104b31e83ee681818a9"} Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.715044 4886 scope.go:117] "RemoveContainer" containerID="c955b1a526ed486b0ca332997a1e4a75451163825649ca98e777b93f4c4dd6aa" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.728735 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vbj6z" event={"ID":"820724e3-dec6-48f0-8626-e287d58059d3","Type":"ContainerDied","Data":"d85431e5733e4a025e21b823f99b0d796e1f44bc13949a1aa1c665c634524417"} Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.728783 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85431e5733e4a025e21b823f99b0d796e1f44bc13949a1aa1c665c634524417" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.729184 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vbj6z" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.739727 4886 generic.go:334] "Generic (PLEG): container finished" podID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerID="0a2394bb23bdc576103a3a750bc16ce96e45f561bdb043e7aa24c8b8b48fbe56" exitCode=0 Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.739942 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4a3df3e-e493-448e-afb1-b52e1a50437a","Type":"ContainerDied","Data":"0a2394bb23bdc576103a3a750bc16ce96e45f561bdb043e7aa24c8b8b48fbe56"} Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.757271 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.765450 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.774810 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.793290 4886 scope.go:117] "RemoveContainer" containerID="8e39182930d884585426ec654fe44ba08ebe43f29f1ea47f6779bc6dbdc6d168" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.796747 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:09:11 crc kubenswrapper[4886]: E1124 09:09:11.797339 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerName="glance-log" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797357 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerName="glance-log" Nov 24 09:09:11 crc kubenswrapper[4886]: E1124 09:09:11.797397 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerName="glance-log" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797406 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerName="glance-log" Nov 24 09:09:11 crc kubenswrapper[4886]: E1124 09:09:11.797418 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerName="glance-httpd" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797425 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerName="glance-httpd" Nov 24 09:09:11 crc kubenswrapper[4886]: E1124 09:09:11.797446 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerName="glance-httpd" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797452 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerName="glance-httpd" Nov 24 09:09:11 crc 
kubenswrapper[4886]: E1124 09:09:11.797465 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="820724e3-dec6-48f0-8626-e287d58059d3" containerName="mariadb-database-create" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797471 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="820724e3-dec6-48f0-8626-e287d58059d3" containerName="mariadb-database-create" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797665 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerName="glance-httpd" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797682 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="820724e3-dec6-48f0-8626-e287d58059d3" containerName="mariadb-database-create" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797690 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerName="glance-httpd" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797703 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" containerName="glance-log" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.797717 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" containerName="glance-log" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.799938 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.804708 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/caebd1b1-b583-446f-bfc8-9c4a1be619da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.805581 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.805742 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.844980 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.908910 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnmd\" (UniqueName: \"kubernetes.io/projected/b4a3df3e-e493-448e-afb1-b52e1a50437a-kube-api-access-6vnmd\") pod \"b4a3df3e-e493-448e-afb1-b52e1a50437a\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909016 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-internal-tls-certs\") pod \"b4a3df3e-e493-448e-afb1-b52e1a50437a\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909129 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-combined-ca-bundle\") pod \"b4a3df3e-e493-448e-afb1-b52e1a50437a\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 
09:09:11.909208 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-logs\") pod \"b4a3df3e-e493-448e-afb1-b52e1a50437a\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909240 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b4a3df3e-e493-448e-afb1-b52e1a50437a\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909358 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-scripts\") pod \"b4a3df3e-e493-448e-afb1-b52e1a50437a\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909387 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-config-data\") pod \"b4a3df3e-e493-448e-afb1-b52e1a50437a\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909411 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-httpd-run\") pod \"b4a3df3e-e493-448e-afb1-b52e1a50437a\" (UID: \"b4a3df3e-e493-448e-afb1-b52e1a50437a\") " Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909692 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " 
pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909766 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f223fa66-cb1a-4f97-970b-9c64793d34b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909795 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909842 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909877 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zrwb\" (UniqueName: \"kubernetes.io/projected/f223fa66-cb1a-4f97-970b-9c64793d34b9-kube-api-access-2zrwb\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909899 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f223fa66-cb1a-4f97-970b-9c64793d34b9-logs\") pod \"glance-default-external-api-0\" (UID: 
\"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909945 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.909985 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.919351 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b4a3df3e-e493-448e-afb1-b52e1a50437a" (UID: "b4a3df3e-e493-448e-afb1-b52e1a50437a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.920544 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-logs" (OuterVolumeSpecName: "logs") pod "b4a3df3e-e493-448e-afb1-b52e1a50437a" (UID: "b4a3df3e-e493-448e-afb1-b52e1a50437a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.944337 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a3df3e-e493-448e-afb1-b52e1a50437a-kube-api-access-6vnmd" (OuterVolumeSpecName: "kube-api-access-6vnmd") pod "b4a3df3e-e493-448e-afb1-b52e1a50437a" (UID: "b4a3df3e-e493-448e-afb1-b52e1a50437a"). InnerVolumeSpecName "kube-api-access-6vnmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.944413 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-scripts" (OuterVolumeSpecName: "scripts") pod "b4a3df3e-e493-448e-afb1-b52e1a50437a" (UID: "b4a3df3e-e493-448e-afb1-b52e1a50437a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:11 crc kubenswrapper[4886]: I1124 09:09:11.944572 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "b4a3df3e-e493-448e-afb1-b52e1a50437a" (UID: "b4a3df3e-e493-448e-afb1-b52e1a50437a"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.013017 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.013522 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.013941 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zrwb\" (UniqueName: \"kubernetes.io/projected/f223fa66-cb1a-4f97-970b-9c64793d34b9-kube-api-access-2zrwb\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.013973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f223fa66-cb1a-4f97-970b-9c64793d34b9-logs\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.014014 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: 
I1124 09:09:12.014040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.014128 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.014244 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f223fa66-cb1a-4f97-970b-9c64793d34b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.014308 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.014338 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.014352 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.014369 4886 reconciler_common.go:293] "Volume detached for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b4a3df3e-e493-448e-afb1-b52e1a50437a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.014381 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vnmd\" (UniqueName: \"kubernetes.io/projected/b4a3df3e-e493-448e-afb1-b52e1a50437a-kube-api-access-6vnmd\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.031866 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.035139 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f223fa66-cb1a-4f97-970b-9c64793d34b9-logs\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.037263 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f223fa66-cb1a-4f97-970b-9c64793d34b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.043346 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.044875 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.047173 4886 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.052555 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.052568 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f223fa66-cb1a-4f97-970b-9c64793d34b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.072042 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zrwb\" (UniqueName: \"kubernetes.io/projected/f223fa66-cb1a-4f97-970b-9c64793d34b9-kube-api-access-2zrwb\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.085025 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"b4a3df3e-e493-448e-afb1-b52e1a50437a" (UID: "b4a3df3e-e493-448e-afb1-b52e1a50437a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.109603 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-config-data" (OuterVolumeSpecName: "config-data") pod "b4a3df3e-e493-448e-afb1-b52e1a50437a" (UID: "b4a3df3e-e493-448e-afb1-b52e1a50437a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.114545 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f223fa66-cb1a-4f97-970b-9c64793d34b9\") " pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.116125 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.116368 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.116389 4886 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.119729 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-internal-tls-certs" (OuterVolumeSpecName: 
"internal-tls-certs") pod "b4a3df3e-e493-448e-afb1-b52e1a50437a" (UID: "b4a3df3e-e493-448e-afb1-b52e1a50437a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.136929 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.218210 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a3df3e-e493-448e-afb1-b52e1a50437a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.409322 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.458427 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpnb9\" (UniqueName: \"kubernetes.io/projected/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-kube-api-access-jpnb9\") pod \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\" (UID: \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.458719 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-operator-scripts\") pod \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\" (UID: \"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.459914 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a" (UID: "8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.471911 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-kube-api-access-jpnb9" (OuterVolumeSpecName: "kube-api-access-jpnb9") pod "8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a" (UID: "8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a"). InnerVolumeSpecName "kube-api-access-jpnb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.562579 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpnb9\" (UniqueName: \"kubernetes.io/projected/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-kube-api-access-jpnb9\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.562622 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.751647 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.764734 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerStarted","Data":"96dc4f6f7594a2e28308453bae8aa63971155012d936515827b0eb47f243886c"} Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.764971 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="ceilometer-central-agent" containerID="cri-o://f7f56e2886b0914d07b49c4386195ea1f33c7a6a080d3a6058f165d109afb945" gracePeriod=30 Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.765277 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.765341 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="proxy-httpd" containerID="cri-o://96dc4f6f7594a2e28308453bae8aa63971155012d936515827b0eb47f243886c" gracePeriod=30 Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.765406 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="sg-core" containerID="cri-o://c1b2522ad5371fd44110d4c9a8a5ab64c8d6607e2740f6797e68fe8029809028" gracePeriod=30 Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.765487 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="ceilometer-notification-agent" containerID="cri-o://324f73d7ab0dd8ca79c0d39bb881afe710a2a89e1e6d39ed0e03d182ac4e2035" gracePeriod=30 Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.770144 
4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.790349 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-01c7-account-create-h5txn" event={"ID":"58780350-18ab-4b0c-ace4-fa09769e0266","Type":"ContainerDied","Data":"1747a9de11e320924ef4ac21f9e050079e90e4da17b26cdcb774cc7be0f62982"} Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.790409 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1747a9de11e320924ef4ac21f9e050079e90e4da17b26cdcb774cc7be0f62982" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.790555 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.790668 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-01c7-account-create-h5txn" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.806523 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.806654 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-krgfp" event={"ID":"8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a","Type":"ContainerDied","Data":"3b1153f7c29d442470085ddffc0986736145295493087cde55e5fc1945443da8"} Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.806687 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-krgfp" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.806731 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b1153f7c29d442470085ddffc0986736145295493087cde55e5fc1945443da8" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.845956 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b4a3df3e-e493-448e-afb1-b52e1a50437a","Type":"ContainerDied","Data":"7f784a6ead27bc4f173acaa06b1963cd540a199cada1ca65151204c3c05534dc"} Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.846043 4886 scope.go:117] "RemoveContainer" containerID="0a2394bb23bdc576103a3a750bc16ce96e45f561bdb043e7aa24c8b8b48fbe56" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.846301 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.855576 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.6850924210000002 podStartE2EDuration="14.855545507s" podCreationTimestamp="2025-11-24 09:08:58 +0000 UTC" firstStartedPulling="2025-11-24 09:08:59.674168806 +0000 UTC m=+1195.560906941" lastFinishedPulling="2025-11-24 09:09:10.844621892 +0000 UTC m=+1206.731360027" observedRunningTime="2025-11-24 09:09:12.848281953 +0000 UTC m=+1208.735020098" watchObservedRunningTime="2025-11-24 09:09:12.855545507 +0000 UTC m=+1208.742283642" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.878926 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caebd1b1-b583-446f-bfc8-9c4a1be619da" path="/var/lib/kubelet/pods/caebd1b1-b583-446f-bfc8-9c4a1be619da/volumes" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.883021 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-f4qmk\" (UniqueName: \"kubernetes.io/projected/58780350-18ab-4b0c-ace4-fa09769e0266-kube-api-access-f4qmk\") pod \"58780350-18ab-4b0c-ace4-fa09769e0266\" (UID: \"58780350-18ab-4b0c-ace4-fa09769e0266\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.883144 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zswfn\" (UniqueName: \"kubernetes.io/projected/78e1de22-7cd4-4929-b282-886695a613c2-kube-api-access-zswfn\") pod \"78e1de22-7cd4-4929-b282-886695a613c2\" (UID: \"78e1de22-7cd4-4929-b282-886695a613c2\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.883348 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtkxd\" (UniqueName: \"kubernetes.io/projected/902ebf1f-132f-40a8-b469-d816a555740e-kube-api-access-gtkxd\") pod \"902ebf1f-132f-40a8-b469-d816a555740e\" (UID: \"902ebf1f-132f-40a8-b469-d816a555740e\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.883395 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/902ebf1f-132f-40a8-b469-d816a555740e-operator-scripts\") pod \"902ebf1f-132f-40a8-b469-d816a555740e\" (UID: \"902ebf1f-132f-40a8-b469-d816a555740e\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.883488 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hndkb\" (UniqueName: \"kubernetes.io/projected/3d522eb1-b9f9-47ae-bb27-616dffd736d3-kube-api-access-hndkb\") pod \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\" (UID: \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.883511 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e1de22-7cd4-4929-b282-886695a613c2-operator-scripts\") pod \"78e1de22-7cd4-4929-b282-886695a613c2\" (UID: 
\"78e1de22-7cd4-4929-b282-886695a613c2\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.883571 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d522eb1-b9f9-47ae-bb27-616dffd736d3-operator-scripts\") pod \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\" (UID: \"3d522eb1-b9f9-47ae-bb27-616dffd736d3\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.883623 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58780350-18ab-4b0c-ace4-fa09769e0266-operator-scripts\") pod \"58780350-18ab-4b0c-ace4-fa09769e0266\" (UID: \"58780350-18ab-4b0c-ace4-fa09769e0266\") " Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.891231 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78e1de22-7cd4-4929-b282-886695a613c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78e1de22-7cd4-4929-b282-886695a613c2" (UID: "78e1de22-7cd4-4929-b282-886695a613c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.891343 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58780350-18ab-4b0c-ace4-fa09769e0266-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58780350-18ab-4b0c-ace4-fa09769e0266" (UID: "58780350-18ab-4b0c-ace4-fa09769e0266"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.891910 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d522eb1-b9f9-47ae-bb27-616dffd736d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d522eb1-b9f9-47ae-bb27-616dffd736d3" (UID: "3d522eb1-b9f9-47ae-bb27-616dffd736d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.893095 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902ebf1f-132f-40a8-b469-d816a555740e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "902ebf1f-132f-40a8-b469-d816a555740e" (UID: "902ebf1f-132f-40a8-b469-d816a555740e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.895070 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d522eb1-b9f9-47ae-bb27-616dffd736d3-kube-api-access-hndkb" (OuterVolumeSpecName: "kube-api-access-hndkb") pod "3d522eb1-b9f9-47ae-bb27-616dffd736d3" (UID: "3d522eb1-b9f9-47ae-bb27-616dffd736d3"). InnerVolumeSpecName "kube-api-access-hndkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.898281 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58780350-18ab-4b0c-ace4-fa09769e0266-kube-api-access-f4qmk" (OuterVolumeSpecName: "kube-api-access-f4qmk") pod "58780350-18ab-4b0c-ace4-fa09769e0266" (UID: "58780350-18ab-4b0c-ace4-fa09769e0266"). InnerVolumeSpecName "kube-api-access-f4qmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.903046 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e1de22-7cd4-4929-b282-886695a613c2-kube-api-access-zswfn" (OuterVolumeSpecName: "kube-api-access-zswfn") pod "78e1de22-7cd4-4929-b282-886695a613c2" (UID: "78e1de22-7cd4-4929-b282-886695a613c2"). InnerVolumeSpecName "kube-api-access-zswfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.919577 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902ebf1f-132f-40a8-b469-d816a555740e-kube-api-access-gtkxd" (OuterVolumeSpecName: "kube-api-access-gtkxd") pod "902ebf1f-132f-40a8-b469-d816a555740e" (UID: "902ebf1f-132f-40a8-b469-d816a555740e"). InnerVolumeSpecName "kube-api-access-gtkxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.992236 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4qmk\" (UniqueName: \"kubernetes.io/projected/58780350-18ab-4b0c-ace4-fa09769e0266-kube-api-access-f4qmk\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.992274 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zswfn\" (UniqueName: \"kubernetes.io/projected/78e1de22-7cd4-4929-b282-886695a613c2-kube-api-access-zswfn\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.992289 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtkxd\" (UniqueName: \"kubernetes.io/projected/902ebf1f-132f-40a8-b469-d816a555740e-kube-api-access-gtkxd\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.992308 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/902ebf1f-132f-40a8-b469-d816a555740e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.992318 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hndkb\" (UniqueName: \"kubernetes.io/projected/3d522eb1-b9f9-47ae-bb27-616dffd736d3-kube-api-access-hndkb\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.992329 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78e1de22-7cd4-4929-b282-886695a613c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.992339 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d522eb1-b9f9-47ae-bb27-616dffd736d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:12 crc kubenswrapper[4886]: I1124 09:09:12.992347 4886 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58780350-18ab-4b0c-ace4-fa09769e0266-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.104635 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.124851 4886 scope.go:117] "RemoveContainer" containerID="93dad6caf21ecc145b49baafdfd8e66227d03285557979991d216a83b024b3c6" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.125232 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.130434 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:09:13 crc kubenswrapper[4886]: E1124 09:09:13.131013 4886 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3d522eb1-b9f9-47ae-bb27-616dffd736d3" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131030 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d522eb1-b9f9-47ae-bb27-616dffd736d3" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: E1124 09:09:13.131058 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58780350-18ab-4b0c-ace4-fa09769e0266" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131067 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="58780350-18ab-4b0c-ace4-fa09769e0266" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: E1124 09:09:13.131096 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a" containerName="mariadb-database-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131105 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a" containerName="mariadb-database-create" Nov 24 09:09:13 crc kubenswrapper[4886]: E1124 09:09:13.131126 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902ebf1f-132f-40a8-b469-d816a555740e" containerName="mariadb-database-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131132 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="902ebf1f-132f-40a8-b469-d816a555740e" containerName="mariadb-database-create" Nov 24 09:09:13 crc kubenswrapper[4886]: E1124 09:09:13.131143 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e1de22-7cd4-4929-b282-886695a613c2" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131172 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e1de22-7cd4-4929-b282-886695a613c2" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 
09:09:13.131420 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e1de22-7cd4-4929-b282-886695a613c2" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131438 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a" containerName="mariadb-database-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131448 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="58780350-18ab-4b0c-ace4-fa09769e0266" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131461 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d522eb1-b9f9-47ae-bb27-616dffd736d3" containerName="mariadb-account-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.131475 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="902ebf1f-132f-40a8-b469-d816a555740e" containerName="mariadb-database-create" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.138089 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.153434 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.153669 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.191863 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.195381 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.195461 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tgqb\" (UniqueName: \"kubernetes.io/projected/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-kube-api-access-6tgqb\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.195496 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.195524 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.195541 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.195572 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.195605 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.195633 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.222473 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:09:13 
crc kubenswrapper[4886]: I1124 09:09:13.297738 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.297834 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tgqb\" (UniqueName: \"kubernetes.io/projected/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-kube-api-access-6tgqb\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.297873 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.297902 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.297926 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.297957 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.297987 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.298013 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.298445 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.299361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.301304 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.304920 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.306287 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.306374 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.306959 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.321354 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tgqb\" (UniqueName: 
\"kubernetes.io/projected/c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404-kube-api-access-6tgqb\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.342672 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.494839 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.868763 4886 generic.go:334] "Generic (PLEG): container finished" podID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerID="96dc4f6f7594a2e28308453bae8aa63971155012d936515827b0eb47f243886c" exitCode=0 Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.868806 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerDied","Data":"96dc4f6f7594a2e28308453bae8aa63971155012d936515827b0eb47f243886c"} Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.872580 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-bf4c-account-create-2vcrs" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.872592 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-bf4c-account-create-2vcrs" event={"ID":"78e1de22-7cd4-4929-b282-886695a613c2","Type":"ContainerDied","Data":"dbf415ade63df4df2a7dc7c6b8c9041d60f3cd45d9bb7297a96b43a0022e9ae3"} Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.872650 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbf415ade63df4df2a7dc7c6b8c9041d60f3cd45d9bb7297a96b43a0022e9ae3" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.874541 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3106-account-create-rd5w6" event={"ID":"3d522eb1-b9f9-47ae-bb27-616dffd736d3","Type":"ContainerDied","Data":"be9015b6e0d349f6217544b8e9645a579e66dc71c2cd4dc298773b18d97a59fa"} Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.874574 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be9015b6e0d349f6217544b8e9645a579e66dc71c2cd4dc298773b18d97a59fa" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.874633 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3106-account-create-rd5w6" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.876795 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t72qh" event={"ID":"902ebf1f-132f-40a8-b469-d816a555740e","Type":"ContainerDied","Data":"a1874849dc8d2f7daebfb7fc8bc983dd01de5e3444a668e5af96c1ade874f822"} Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.876832 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1874849dc8d2f7daebfb7fc8bc983dd01de5e3444a668e5af96c1ade874f822" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.876884 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-t72qh" Nov 24 09:09:13 crc kubenswrapper[4886]: I1124 09:09:13.880516 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f223fa66-cb1a-4f97-970b-9c64793d34b9","Type":"ContainerStarted","Data":"99fbff9c9072d8c4f35c859d6666b943cf6e0e2991103f6c0f86b9ca0bc7c076"} Nov 24 09:09:14 crc kubenswrapper[4886]: I1124 09:09:14.099293 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:09:14 crc kubenswrapper[4886]: W1124 09:09:14.104677 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2a34763_2dcf_4e0d_a6b6_7e26dc1f0404.slice/crio-84e1f951ede265fd6fd52cb2565541490e0ea45928c4e75d014127d98a9d3a6b WatchSource:0}: Error finding container 84e1f951ede265fd6fd52cb2565541490e0ea45928c4e75d014127d98a9d3a6b: Status 404 returned error can't find the container with id 84e1f951ede265fd6fd52cb2565541490e0ea45928c4e75d014127d98a9d3a6b Nov 24 09:09:14 crc kubenswrapper[4886]: I1124 09:09:14.861946 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a3df3e-e493-448e-afb1-b52e1a50437a" path="/var/lib/kubelet/pods/b4a3df3e-e493-448e-afb1-b52e1a50437a/volumes" Nov 24 09:09:14 crc kubenswrapper[4886]: I1124 09:09:14.900045 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404","Type":"ContainerStarted","Data":"84e1f951ede265fd6fd52cb2565541490e0ea45928c4e75d014127d98a9d3a6b"} Nov 24 09:09:14 crc kubenswrapper[4886]: I1124 09:09:14.912520 4886 generic.go:334] "Generic (PLEG): container finished" podID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerID="c1b2522ad5371fd44110d4c9a8a5ab64c8d6607e2740f6797e68fe8029809028" exitCode=2 Nov 24 09:09:14 crc kubenswrapper[4886]: I1124 09:09:14.912593 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerDied","Data":"c1b2522ad5371fd44110d4c9a8a5ab64c8d6607e2740f6797e68fe8029809028"} Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.019490 4886 generic.go:334] "Generic (PLEG): container finished" podID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerID="324f73d7ab0dd8ca79c0d39bb881afe710a2a89e1e6d39ed0e03d182ac4e2035" exitCode=0 Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.019960 4886 generic.go:334] "Generic (PLEG): container finished" podID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerID="f7f56e2886b0914d07b49c4386195ea1f33c7a6a080d3a6058f165d109afb945" exitCode=0 Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.020107 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerDied","Data":"324f73d7ab0dd8ca79c0d39bb881afe710a2a89e1e6d39ed0e03d182ac4e2035"} Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.020163 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerDied","Data":"f7f56e2886b0914d07b49c4386195ea1f33c7a6a080d3a6058f165d109afb945"} Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.044350 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404","Type":"ContainerStarted","Data":"92036ba3f72085a94ff33288211d68840c0a93a8ba34c9b79147583884c63084"} Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.060436 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f223fa66-cb1a-4f97-970b-9c64793d34b9","Type":"ContainerStarted","Data":"3a208ea9974c0a1ea11317c06342f287c85474fe5fd3a21649919ee1660e2048"} Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 
09:09:16.075249 4886 generic.go:334] "Generic (PLEG): container finished" podID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerID="692850d79528a83bba835c99fe3464de2f71e29a9c94c5526cca407f8eedbbb6" exitCode=137 Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.075311 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75ffb75746-pwc5g" event={"ID":"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc","Type":"ContainerDied","Data":"692850d79528a83bba835c99fe3464de2f71e29a9c94c5526cca407f8eedbbb6"} Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.408120 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c9t2q"] Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.413820 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.419942 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vbwk7" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.420134 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.420283 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.442002 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c9t2q"] Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.479952 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 
09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.480079 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-config-data\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.480134 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-scripts\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.480312 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlq5c\" (UniqueName: \"kubernetes.io/projected/f2a4f443-5a71-4a49-816a-b052b3f6246c-kube-api-access-mlq5c\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.581070 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-scripts\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.581175 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlq5c\" (UniqueName: \"kubernetes.io/projected/f2a4f443-5a71-4a49-816a-b052b3f6246c-kube-api-access-mlq5c\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " 
pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.581226 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.581277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-config-data\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.590721 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-config-data\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.593906 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-scripts\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.604032 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlq5c\" (UniqueName: \"kubernetes.io/projected/f2a4f443-5a71-4a49-816a-b052b3f6246c-kube-api-access-mlq5c\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " 
pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.608361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c9t2q\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") " pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.794686 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c9t2q" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.897333 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:16 crc kubenswrapper[4886]: I1124 09:09:16.908064 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.001753 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-sg-core-conf-yaml\") pod \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005516 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-scripts\") pod \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005563 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-config-data\") pod \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\" (UID: 
\"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005662 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-combined-ca-bundle\") pod \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005694 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-log-httpd\") pod \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005768 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-logs\") pod \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005820 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-scripts\") pod \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005888 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-config-data\") pod \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005921 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-run-httpd\") pod \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005947 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-secret-key\") pod \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.005977 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrmk9\" (UniqueName: \"kubernetes.io/projected/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-kube-api-access-lrmk9\") pod \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\" (UID: \"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.006002 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr2rc\" (UniqueName: \"kubernetes.io/projected/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-kube-api-access-xr2rc\") pod \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.006061 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-combined-ca-bundle\") pod \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.006086 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-tls-certs\") pod \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\" (UID: \"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc\") " Nov 24 09:09:17 
crc kubenswrapper[4886]: I1124 09:09:17.006694 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" (UID: "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.007009 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" (UID: "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.007372 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-logs" (OuterVolumeSpecName: "logs") pod "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" (UID: "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.012607 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.012712 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.012774 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.016834 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-kube-api-access-lrmk9" (OuterVolumeSpecName: "kube-api-access-lrmk9") pod "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" (UID: "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1"). InnerVolumeSpecName "kube-api-access-lrmk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.018200 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" (UID: "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.023467 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-scripts" (OuterVolumeSpecName: "scripts") pod "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" (UID: "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.029848 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-kube-api-access-xr2rc" (OuterVolumeSpecName: "kube-api-access-xr2rc") pod "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" (UID: "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc"). InnerVolumeSpecName "kube-api-access-xr2rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.048787 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-scripts" (OuterVolumeSpecName: "scripts") pod "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" (UID: "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.064534 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" (UID: "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.068621 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" (UID: "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.074400 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-config-data" (OuterVolumeSpecName: "config-data") pod "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" (UID: "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.103482 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f223fa66-cb1a-4f97-970b-9c64793d34b9","Type":"ContainerStarted","Data":"0714cdc7adea9622bc2c36c98e4785c9b257ab6a841f75c52c29639a498896cc"} Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.119388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75ffb75746-pwc5g" event={"ID":"54dfe4a2-2d0f-417f-b1d6-df18e5581bfc","Type":"ContainerDied","Data":"4af8b7c8248b28e69e1033743affb1fbf6a8db634ffd4fc78c957c02576ffe85"} Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.119456 4886 scope.go:117] "RemoveContainer" containerID="06e9411832b7ffda01b80eacb06c2049544b01676f1e649af4e6b2361c635c65" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.119652 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75ffb75746-pwc5g" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.123949 4886 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.124002 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrmk9\" (UniqueName: \"kubernetes.io/projected/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-kube-api-access-lrmk9\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.124021 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr2rc\" (UniqueName: \"kubernetes.io/projected/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-kube-api-access-xr2rc\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.124035 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.124047 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.124060 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.124077 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 
09:09:17.124090 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.140968 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4555cb4f-bd53-4d32-b473-1ce3cfdd95c1","Type":"ContainerDied","Data":"9ff91a58d70f1ecc134738515057d780f6030ae24ca7e6af788efad1a750ed31"} Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.141108 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.141397 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.141371068 podStartE2EDuration="6.141371068s" podCreationTimestamp="2025-11-24 09:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:09:17.138531899 +0000 UTC m=+1213.025270034" watchObservedRunningTime="2025-11-24 09:09:17.141371068 +0000 UTC m=+1213.028109203" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.170437 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" (UID: "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.198710 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" (UID: "54dfe4a2-2d0f-417f-b1d6-df18e5581bfc"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.235873 4886 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.235929 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.276731 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-config-data" (OuterVolumeSpecName: "config-data") pod "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" (UID: "4555cb4f-bd53-4d32-b473-1ce3cfdd95c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.337496 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.386413 4886 scope.go:117] "RemoveContainer" containerID="692850d79528a83bba835c99fe3464de2f71e29a9c94c5526cca407f8eedbbb6" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.424060 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c9t2q"] Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.539884 4886 scope.go:117] "RemoveContainer" containerID="96dc4f6f7594a2e28308453bae8aa63971155012d936515827b0eb47f243886c" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.559083 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75ffb75746-pwc5g"] Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.565302 4886 scope.go:117] "RemoveContainer" containerID="c1b2522ad5371fd44110d4c9a8a5ab64c8d6607e2740f6797e68fe8029809028" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.590400 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75ffb75746-pwc5g"] Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.601242 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.602450 4886 scope.go:117] "RemoveContainer" containerID="324f73d7ab0dd8ca79c0d39bb881afe710a2a89e1e6d39ed0e03d182ac4e2035" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.613633 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.625134 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:17 crc kubenswrapper[4886]: 
E1124 09:09:17.625669 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="proxy-httpd" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.625690 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="proxy-httpd" Nov 24 09:09:17 crc kubenswrapper[4886]: E1124 09:09:17.625724 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="sg-core" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.625731 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="sg-core" Nov 24 09:09:17 crc kubenswrapper[4886]: E1124 09:09:17.625750 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.625756 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" Nov 24 09:09:17 crc kubenswrapper[4886]: E1124 09:09:17.625771 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="ceilometer-notification-agent" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.625777 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="ceilometer-notification-agent" Nov 24 09:09:17 crc kubenswrapper[4886]: E1124 09:09:17.625793 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon-log" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.625801 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon-log" Nov 24 09:09:17 crc kubenswrapper[4886]: E1124 09:09:17.625816 4886 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="ceilometer-central-agent" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.625824 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="ceilometer-central-agent" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.626014 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.626032 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="ceilometer-central-agent" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.626049 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="ceilometer-notification-agent" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.626062 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="proxy-httpd" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.626076 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" containerName="sg-core" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.626089 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" containerName="horizon-log" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.627915 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.632136 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.633180 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.633368 4886 scope.go:117] "RemoveContainer" containerID="f7f56e2886b0914d07b49c4386195ea1f33c7a6a080d3a6058f165d109afb945" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.638982 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.744726 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-log-httpd\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.744806 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj58f\" (UniqueName: \"kubernetes.io/projected/810d538a-b17b-4169-ad53-6a041de64fee-kube-api-access-zj58f\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.744925 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-scripts\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.746765 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.746805 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-config-data\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.746939 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.747117 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-run-httpd\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.848656 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-scripts\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.848738 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.848766 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-config-data\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.848795 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.848866 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-run-httpd\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.848904 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-log-httpd\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.848965 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj58f\" (UniqueName: \"kubernetes.io/projected/810d538a-b17b-4169-ad53-6a041de64fee-kube-api-access-zj58f\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.849834 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-log-httpd\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.849844 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-run-httpd\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.855788 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-scripts\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.857029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.863718 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-config-data\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.865461 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.868986 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj58f\" (UniqueName: \"kubernetes.io/projected/810d538a-b17b-4169-ad53-6a041de64fee-kube-api-access-zj58f\") pod \"ceilometer-0\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " pod="openstack/ceilometer-0" Nov 24 09:09:17 crc kubenswrapper[4886]: I1124 09:09:17.957759 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:18 crc kubenswrapper[4886]: I1124 09:09:18.030254 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:18 crc kubenswrapper[4886]: I1124 09:09:18.032761 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-558564f98c-jl2ms" Nov 24 09:09:18 crc kubenswrapper[4886]: I1124 09:09:18.201641 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c9t2q" event={"ID":"f2a4f443-5a71-4a49-816a-b052b3f6246c","Type":"ContainerStarted","Data":"7a8a41ecc73d7f883ee08227e43ee631dee0ab22b8ef701a6e861bbce92e8b5b"} Nov 24 09:09:18 crc kubenswrapper[4886]: I1124 09:09:18.206531 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404","Type":"ContainerStarted","Data":"47046d4c6489b7dba6516bb5aa50fb4831f8ffac5cbed4940caa3f8b191a20bc"} Nov 24 09:09:18 crc kubenswrapper[4886]: I1124 09:09:18.245540 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.245511748 podStartE2EDuration="5.245511748s" podCreationTimestamp="2025-11-24 09:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:09:18.23845035 +0000 UTC m=+1214.125188505" watchObservedRunningTime="2025-11-24 
09:09:18.245511748 +0000 UTC m=+1214.132249883" Nov 24 09:09:18 crc kubenswrapper[4886]: I1124 09:09:18.335016 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:18 crc kubenswrapper[4886]: W1124 09:09:18.343877 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810d538a_b17b_4169_ad53_6a041de64fee.slice/crio-62dd1166ee4ea317406ac68a60ac3d12f3d22b639caa8556c593f0af112c5497 WatchSource:0}: Error finding container 62dd1166ee4ea317406ac68a60ac3d12f3d22b639caa8556c593f0af112c5497: Status 404 returned error can't find the container with id 62dd1166ee4ea317406ac68a60ac3d12f3d22b639caa8556c593f0af112c5497 Nov 24 09:09:18 crc kubenswrapper[4886]: I1124 09:09:18.867560 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4555cb4f-bd53-4d32-b473-1ce3cfdd95c1" path="/var/lib/kubelet/pods/4555cb4f-bd53-4d32-b473-1ce3cfdd95c1/volumes" Nov 24 09:09:18 crc kubenswrapper[4886]: I1124 09:09:18.868713 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54dfe4a2-2d0f-417f-b1d6-df18e5581bfc" path="/var/lib/kubelet/pods/54dfe4a2-2d0f-417f-b1d6-df18e5581bfc/volumes" Nov 24 09:09:19 crc kubenswrapper[4886]: I1124 09:09:19.227232 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerStarted","Data":"ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a"} Nov 24 09:09:19 crc kubenswrapper[4886]: I1124 09:09:19.227700 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerStarted","Data":"62dd1166ee4ea317406ac68a60ac3d12f3d22b639caa8556c593f0af112c5497"} Nov 24 09:09:20 crc kubenswrapper[4886]: I1124 09:09:20.242466 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerStarted","Data":"fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55"} Nov 24 09:09:21 crc kubenswrapper[4886]: I1124 09:09:21.257656 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerStarted","Data":"b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3"} Nov 24 09:09:22 crc kubenswrapper[4886]: I1124 09:09:22.138202 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 09:09:22 crc kubenswrapper[4886]: I1124 09:09:22.139804 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 09:09:22 crc kubenswrapper[4886]: I1124 09:09:22.175839 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 09:09:22 crc kubenswrapper[4886]: I1124 09:09:22.206239 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 09:09:22 crc kubenswrapper[4886]: I1124 09:09:22.271702 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 09:09:22 crc kubenswrapper[4886]: I1124 09:09:22.271918 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 09:09:22 crc kubenswrapper[4886]: I1124 09:09:22.757541 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:23 crc kubenswrapper[4886]: I1124 09:09:23.495363 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:23 crc kubenswrapper[4886]: I1124 09:09:23.495699 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:23 crc kubenswrapper[4886]: I1124 09:09:23.549477 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:23 crc kubenswrapper[4886]: I1124 09:09:23.578928 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:24 crc kubenswrapper[4886]: I1124 09:09:24.294306 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:24 crc kubenswrapper[4886]: I1124 09:09:24.294694 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:24 crc kubenswrapper[4886]: I1124 09:09:24.468213 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 09:09:24 crc kubenswrapper[4886]: I1124 09:09:24.468384 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 09:09:24 crc kubenswrapper[4886]: I1124 09:09:24.614716 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 09:09:26 crc kubenswrapper[4886]: I1124 09:09:26.316717 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 09:09:26 crc kubenswrapper[4886]: I1124 09:09:26.317233 4886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 09:09:26 crc kubenswrapper[4886]: I1124 09:09:26.714882 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:26 crc kubenswrapper[4886]: I1124 09:09:26.717239 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 
09:09:29.361837 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerStarted","Data":"0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d"} Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 09:09:29.362871 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 09:09:29.362178 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="proxy-httpd" containerID="cri-o://0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d" gracePeriod=30 Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 09:09:29.361906 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="ceilometer-central-agent" containerID="cri-o://ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a" gracePeriod=30 Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 09:09:29.362214 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="ceilometer-notification-agent" containerID="cri-o://fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55" gracePeriod=30 Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 09:09:29.362199 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="sg-core" containerID="cri-o://b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3" gracePeriod=30 Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 09:09:29.365727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c9t2q" 
event={"ID":"f2a4f443-5a71-4a49-816a-b052b3f6246c","Type":"ContainerStarted","Data":"239439736ecdb042aea787b616b9c9fc9e8422bebfb64a7cc6b3c3993243325e"} Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 09:09:29.406773 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.822385097 podStartE2EDuration="12.406747414s" podCreationTimestamp="2025-11-24 09:09:17 +0000 UTC" firstStartedPulling="2025-11-24 09:09:18.347410494 +0000 UTC m=+1214.234148629" lastFinishedPulling="2025-11-24 09:09:28.931772821 +0000 UTC m=+1224.818510946" observedRunningTime="2025-11-24 09:09:29.401359363 +0000 UTC m=+1225.288097508" watchObservedRunningTime="2025-11-24 09:09:29.406747414 +0000 UTC m=+1225.293485549" Nov 24 09:09:29 crc kubenswrapper[4886]: I1124 09:09:29.433269 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-c9t2q" podStartSLOduration=1.931853576 podStartE2EDuration="13.433231547s" podCreationTimestamp="2025-11-24 09:09:16 +0000 UTC" firstStartedPulling="2025-11-24 09:09:17.438277791 +0000 UTC m=+1213.325015916" lastFinishedPulling="2025-11-24 09:09:28.939655752 +0000 UTC m=+1224.826393887" observedRunningTime="2025-11-24 09:09:29.424103241 +0000 UTC m=+1225.310841386" watchObservedRunningTime="2025-11-24 09:09:29.433231547 +0000 UTC m=+1225.319969692" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.381536 4886 generic.go:334] "Generic (PLEG): container finished" podID="810d538a-b17b-4169-ad53-6a041de64fee" containerID="0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d" exitCode=0 Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.381914 4886 generic.go:334] "Generic (PLEG): container finished" podID="810d538a-b17b-4169-ad53-6a041de64fee" containerID="b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3" exitCode=2 Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.381925 4886 generic.go:334] 
"Generic (PLEG): container finished" podID="810d538a-b17b-4169-ad53-6a041de64fee" containerID="fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55" exitCode=0 Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.381770 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerDied","Data":"0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d"} Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.381995 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerDied","Data":"b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3"} Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.382021 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerDied","Data":"fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55"} Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.813990 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.873125 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-sg-core-conf-yaml\") pod \"810d538a-b17b-4169-ad53-6a041de64fee\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.873220 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-combined-ca-bundle\") pod \"810d538a-b17b-4169-ad53-6a041de64fee\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.873246 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-run-httpd\") pod \"810d538a-b17b-4169-ad53-6a041de64fee\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.873279 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj58f\" (UniqueName: \"kubernetes.io/projected/810d538a-b17b-4169-ad53-6a041de64fee-kube-api-access-zj58f\") pod \"810d538a-b17b-4169-ad53-6a041de64fee\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.873303 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-config-data\") pod \"810d538a-b17b-4169-ad53-6a041de64fee\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.873409 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-log-httpd\") pod \"810d538a-b17b-4169-ad53-6a041de64fee\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.873460 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-scripts\") pod \"810d538a-b17b-4169-ad53-6a041de64fee\" (UID: \"810d538a-b17b-4169-ad53-6a041de64fee\") " Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.875593 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "810d538a-b17b-4169-ad53-6a041de64fee" (UID: "810d538a-b17b-4169-ad53-6a041de64fee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.876187 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "810d538a-b17b-4169-ad53-6a041de64fee" (UID: "810d538a-b17b-4169-ad53-6a041de64fee"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.898453 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-scripts" (OuterVolumeSpecName: "scripts") pod "810d538a-b17b-4169-ad53-6a041de64fee" (UID: "810d538a-b17b-4169-ad53-6a041de64fee"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.898485 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810d538a-b17b-4169-ad53-6a041de64fee-kube-api-access-zj58f" (OuterVolumeSpecName: "kube-api-access-zj58f") pod "810d538a-b17b-4169-ad53-6a041de64fee" (UID: "810d538a-b17b-4169-ad53-6a041de64fee"). InnerVolumeSpecName "kube-api-access-zj58f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.925890 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "810d538a-b17b-4169-ad53-6a041de64fee" (UID: "810d538a-b17b-4169-ad53-6a041de64fee"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.976047 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.976096 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj58f\" (UniqueName: \"kubernetes.io/projected/810d538a-b17b-4169-ad53-6a041de64fee-kube-api-access-zj58f\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.976112 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810d538a-b17b-4169-ad53-6a041de64fee-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.976122 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-scripts\") on node \"crc\" 
DevicePath \"\"" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.976131 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:30 crc kubenswrapper[4886]: I1124 09:09:30.977799 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "810d538a-b17b-4169-ad53-6a041de64fee" (UID: "810d538a-b17b-4169-ad53-6a041de64fee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.007254 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-config-data" (OuterVolumeSpecName: "config-data") pod "810d538a-b17b-4169-ad53-6a041de64fee" (UID: "810d538a-b17b-4169-ad53-6a041de64fee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.078742 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.078812 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810d538a-b17b-4169-ad53-6a041de64fee-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.396537 4886 generic.go:334] "Generic (PLEG): container finished" podID="810d538a-b17b-4169-ad53-6a041de64fee" containerID="ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a" exitCode=0 Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.396606 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerDied","Data":"ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a"} Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.396661 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.396694 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"810d538a-b17b-4169-ad53-6a041de64fee","Type":"ContainerDied","Data":"62dd1166ee4ea317406ac68a60ac3d12f3d22b639caa8556c593f0af112c5497"} Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.396723 4886 scope.go:117] "RemoveContainer" containerID="0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.425871 4886 scope.go:117] "RemoveContainer" containerID="b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.461705 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.475887 4886 scope.go:117] "RemoveContainer" containerID="fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.479864 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.488742 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:31 crc kubenswrapper[4886]: E1124 09:09:31.489314 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="ceilometer-notification-agent" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.489335 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="ceilometer-notification-agent" Nov 24 09:09:31 crc kubenswrapper[4886]: E1124 09:09:31.489354 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="sg-core" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 
09:09:31.489360 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="sg-core" Nov 24 09:09:31 crc kubenswrapper[4886]: E1124 09:09:31.489387 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="ceilometer-central-agent" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.489394 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="ceilometer-central-agent" Nov 24 09:09:31 crc kubenswrapper[4886]: E1124 09:09:31.489403 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="proxy-httpd" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.489409 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="proxy-httpd" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.489578 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="proxy-httpd" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.489595 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="ceilometer-notification-agent" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.489608 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="sg-core" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.489620 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="810d538a-b17b-4169-ad53-6a041de64fee" containerName="ceilometer-central-agent" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.491258 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.496888 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.497043 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.521369 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.530749 4886 scope.go:117] "RemoveContainer" containerID="ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.549892 4886 scope.go:117] "RemoveContainer" containerID="0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d" Nov 24 09:09:31 crc kubenswrapper[4886]: E1124 09:09:31.550511 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d\": container with ID starting with 0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d not found: ID does not exist" containerID="0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.550552 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d"} err="failed to get container status \"0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d\": rpc error: code = NotFound desc = could not find container \"0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d\": container with ID starting with 0adc2ac79b15000424002629a6d62054627527afa9fa104606825273ec7bfb4d not found: ID does not exist" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 
09:09:31.550576 4886 scope.go:117] "RemoveContainer" containerID="b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3" Nov 24 09:09:31 crc kubenswrapper[4886]: E1124 09:09:31.551772 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3\": container with ID starting with b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3 not found: ID does not exist" containerID="b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.551801 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3"} err="failed to get container status \"b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3\": rpc error: code = NotFound desc = could not find container \"b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3\": container with ID starting with b0a460651683865540346a39ba696305b2ead6aecdfe10ae79d90e941d0732b3 not found: ID does not exist" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.551819 4886 scope.go:117] "RemoveContainer" containerID="fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55" Nov 24 09:09:31 crc kubenswrapper[4886]: E1124 09:09:31.552093 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55\": container with ID starting with fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55 not found: ID does not exist" containerID="fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.552120 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55"} err="failed to get container status \"fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55\": rpc error: code = NotFound desc = could not find container \"fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55\": container with ID starting with fda2f5cdb97bf67eb8f5c34e3632307a7f115b632daef1b2b3f9ef5e2a4f3b55 not found: ID does not exist" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.552134 4886 scope.go:117] "RemoveContainer" containerID="ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a" Nov 24 09:09:31 crc kubenswrapper[4886]: E1124 09:09:31.552400 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a\": container with ID starting with ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a not found: ID does not exist" containerID="ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.552434 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a"} err="failed to get container status \"ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a\": rpc error: code = NotFound desc = could not find container \"ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a\": container with ID starting with ed260612eacd255aeec78a97266fe341d9ea730ff1c44a4fd0ee0733aadc4f8a not found: ID does not exist" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.689867 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.689925 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-scripts\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.689970 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.690011 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjrk\" (UniqueName: \"kubernetes.io/projected/eae9a888-7e95-434a-b28b-4d2ec4e6483a-kube-api-access-sdjrk\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.690068 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-run-httpd\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.690134 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-config-data\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: 
I1124 09:09:31.690191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-log-httpd\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.785000 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.785068 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.792204 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.792264 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-scripts\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.792291 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.792318 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdjrk\" (UniqueName: \"kubernetes.io/projected/eae9a888-7e95-434a-b28b-4d2ec4e6483a-kube-api-access-sdjrk\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.792378 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-run-httpd\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.792470 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-config-data\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.792490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-log-httpd\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.793521 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-run-httpd\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.793760 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-log-httpd\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.797872 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.802100 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-scripts\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.803131 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.804956 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-config-data\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " pod="openstack/ceilometer-0" Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.812640 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdjrk\" (UniqueName: \"kubernetes.io/projected/eae9a888-7e95-434a-b28b-4d2ec4e6483a-kube-api-access-sdjrk\") pod \"ceilometer-0\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " 
pod="openstack/ceilometer-0"
Nov 24 09:09:31 crc kubenswrapper[4886]: I1124 09:09:31.831625 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 09:09:32 crc kubenswrapper[4886]: I1124 09:09:32.354532 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:09:32 crc kubenswrapper[4886]: I1124 09:09:32.413648 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerStarted","Data":"007747c057d96c51000ba43e25391508d3c3dc4e8cd192e4646e9f46e71f5fe2"}
Nov 24 09:09:32 crc kubenswrapper[4886]: I1124 09:09:32.863579 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810d538a-b17b-4169-ad53-6a041de64fee" path="/var/lib/kubelet/pods/810d538a-b17b-4169-ad53-6a041de64fee/volumes"
Nov 24 09:09:33 crc kubenswrapper[4886]: I1124 09:09:33.426498 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerStarted","Data":"29c9d2823267a5c549796b47a7f559500614ce366147eda6f2233162fdf03b24"}
Nov 24 09:09:34 crc kubenswrapper[4886]: I1124 09:09:34.446044 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerStarted","Data":"8378fb38facacf81faffbc6dcc248fc5d22e3789c995a907f8de6397d8089d16"}
Nov 24 09:09:35 crc kubenswrapper[4886]: I1124 09:09:35.458364 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerStarted","Data":"b9f144ad8b5222ae4f48fa181c294c79c4dc1fdd5a4a9cee03e1fb7ed04c80c5"}
Nov 24 09:09:36 crc kubenswrapper[4886]: I1124 09:09:36.470719 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerStarted","Data":"42e7ca5bb136971ff3d79892e394ef186e97834ae557592832d118b7f883de10"}
Nov 24 09:09:36 crc kubenswrapper[4886]: I1124 09:09:36.471819 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 09:09:36 crc kubenswrapper[4886]: I1124 09:09:36.506770 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.254245479 podStartE2EDuration="5.506749496s" podCreationTimestamp="2025-11-24 09:09:31 +0000 UTC" firstStartedPulling="2025-11-24 09:09:32.366165556 +0000 UTC m=+1228.252903691" lastFinishedPulling="2025-11-24 09:09:35.618669573 +0000 UTC m=+1231.505407708" observedRunningTime="2025-11-24 09:09:36.500016917 +0000 UTC m=+1232.386755072" watchObservedRunningTime="2025-11-24 09:09:36.506749496 +0000 UTC m=+1232.393487631"
Nov 24 09:09:42 crc kubenswrapper[4886]: I1124 09:09:42.535563 4886 generic.go:334] "Generic (PLEG): container finished" podID="f2a4f443-5a71-4a49-816a-b052b3f6246c" containerID="239439736ecdb042aea787b616b9c9fc9e8422bebfb64a7cc6b3c3993243325e" exitCode=0
Nov 24 09:09:42 crc kubenswrapper[4886]: I1124 09:09:42.535625 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c9t2q" event={"ID":"f2a4f443-5a71-4a49-816a-b052b3f6246c","Type":"ContainerDied","Data":"239439736ecdb042aea787b616b9c9fc9e8422bebfb64a7cc6b3c3993243325e"}
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.014794 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c9t2q"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.167912 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-config-data\") pod \"f2a4f443-5a71-4a49-816a-b052b3f6246c\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") "
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.168053 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-combined-ca-bundle\") pod \"f2a4f443-5a71-4a49-816a-b052b3f6246c\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") "
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.168097 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-scripts\") pod \"f2a4f443-5a71-4a49-816a-b052b3f6246c\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") "
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.168233 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlq5c\" (UniqueName: \"kubernetes.io/projected/f2a4f443-5a71-4a49-816a-b052b3f6246c-kube-api-access-mlq5c\") pod \"f2a4f443-5a71-4a49-816a-b052b3f6246c\" (UID: \"f2a4f443-5a71-4a49-816a-b052b3f6246c\") "
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.177347 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a4f443-5a71-4a49-816a-b052b3f6246c-kube-api-access-mlq5c" (OuterVolumeSpecName: "kube-api-access-mlq5c") pod "f2a4f443-5a71-4a49-816a-b052b3f6246c" (UID: "f2a4f443-5a71-4a49-816a-b052b3f6246c"). InnerVolumeSpecName "kube-api-access-mlq5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.177670 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-scripts" (OuterVolumeSpecName: "scripts") pod "f2a4f443-5a71-4a49-816a-b052b3f6246c" (UID: "f2a4f443-5a71-4a49-816a-b052b3f6246c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.203060 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2a4f443-5a71-4a49-816a-b052b3f6246c" (UID: "f2a4f443-5a71-4a49-816a-b052b3f6246c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.209061 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-config-data" (OuterVolumeSpecName: "config-data") pod "f2a4f443-5a71-4a49-816a-b052b3f6246c" (UID: "f2a4f443-5a71-4a49-816a-b052b3f6246c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.270315 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlq5c\" (UniqueName: \"kubernetes.io/projected/f2a4f443-5a71-4a49-816a-b052b3f6246c-kube-api-access-mlq5c\") on node \"crc\" DevicePath \"\""
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.270368 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.270381 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.270389 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a4f443-5a71-4a49-816a-b052b3f6246c-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.568306 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c9t2q" event={"ID":"f2a4f443-5a71-4a49-816a-b052b3f6246c","Type":"ContainerDied","Data":"7a8a41ecc73d7f883ee08227e43ee631dee0ab22b8ef701a6e861bbce92e8b5b"}
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.568357 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8a41ecc73d7f883ee08227e43ee631dee0ab22b8ef701a6e861bbce92e8b5b"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.568440 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c9t2q"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.719329 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 09:09:44 crc kubenswrapper[4886]: E1124 09:09:44.719818 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a4f443-5a71-4a49-816a-b052b3f6246c" containerName="nova-cell0-conductor-db-sync"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.719838 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a4f443-5a71-4a49-816a-b052b3f6246c" containerName="nova-cell0-conductor-db-sync"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.720047 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a4f443-5a71-4a49-816a-b052b3f6246c" containerName="nova-cell0-conductor-db-sync"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.720857 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.725414 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vbwk7"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.725788 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.738109 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 09:09:44 crc kubenswrapper[4886]: E1124 09:09:44.887840 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2a4f443_5a71_4a49_816a_b052b3f6246c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2a4f443_5a71_4a49_816a_b052b3f6246c.slice/crio-7a8a41ecc73d7f883ee08227e43ee631dee0ab22b8ef701a6e861bbce92e8b5b\": RecentStats: unable to find data in memory cache]"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.917951 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.918107 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:44 crc kubenswrapper[4886]: I1124 09:09:44.918253 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zzq\" (UniqueName: \"kubernetes.io/projected/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-kube-api-access-76zzq\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:45 crc kubenswrapper[4886]: I1124 09:09:45.020492 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:45 crc kubenswrapper[4886]: I1124 09:09:45.020626 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:45 crc kubenswrapper[4886]: I1124 09:09:45.020683 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zzq\" (UniqueName: \"kubernetes.io/projected/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-kube-api-access-76zzq\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:45 crc kubenswrapper[4886]: I1124 09:09:45.027585 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:45 crc kubenswrapper[4886]: I1124 09:09:45.037927 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:45 crc kubenswrapper[4886]: I1124 09:09:45.041408 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zzq\" (UniqueName: \"kubernetes.io/projected/2f98453e-9a49-498a-bcc6-6a4d82f39fc7-kube-api-access-76zzq\") pod \"nova-cell0-conductor-0\" (UID: \"2f98453e-9a49-498a-bcc6-6a4d82f39fc7\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:45 crc kubenswrapper[4886]: I1124 09:09:45.077432 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:45 crc kubenswrapper[4886]: I1124 09:09:45.595774 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.596597 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f98453e-9a49-498a-bcc6-6a4d82f39fc7","Type":"ContainerStarted","Data":"39446a24a6b41994aa1f00d26c51610c0ee65cde66ffeeabeb71b625482ec728"}
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.597127 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f98453e-9a49-498a-bcc6-6a4d82f39fc7","Type":"ContainerStarted","Data":"5cc0ab66e330dd29aaf811061892a484e1579dabe8b4b7bfedeac551c2f795b3"}
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.597189 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.622273 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.622248333 podStartE2EDuration="2.622248333s" podCreationTimestamp="2025-11-24 09:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:09:46.61299737 +0000 UTC m=+1242.499735525" watchObservedRunningTime="2025-11-24 09:09:46.622248333 +0000 UTC m=+1242.508986468"
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.791124 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.792698 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="ceilometer-central-agent" containerID="cri-o://29c9d2823267a5c549796b47a7f559500614ce366147eda6f2233162fdf03b24" gracePeriod=30
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.792733 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="sg-core" containerID="cri-o://b9f144ad8b5222ae4f48fa181c294c79c4dc1fdd5a4a9cee03e1fb7ed04c80c5" gracePeriod=30
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.792745 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="proxy-httpd" containerID="cri-o://42e7ca5bb136971ff3d79892e394ef186e97834ae557592832d118b7f883de10" gracePeriod=30
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.792802 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="ceilometer-notification-agent" containerID="cri-o://8378fb38facacf81faffbc6dcc248fc5d22e3789c995a907f8de6397d8089d16" gracePeriod=30
Nov 24 09:09:46 crc kubenswrapper[4886]: I1124 09:09:46.808891 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Nov 24 09:09:47 crc kubenswrapper[4886]: I1124 09:09:47.609752 4886 generic.go:334] "Generic (PLEG): container finished" podID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerID="42e7ca5bb136971ff3d79892e394ef186e97834ae557592832d118b7f883de10" exitCode=0
Nov 24 09:09:47 crc kubenswrapper[4886]: I1124 09:09:47.609790 4886 generic.go:334] "Generic (PLEG): container finished" podID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerID="b9f144ad8b5222ae4f48fa181c294c79c4dc1fdd5a4a9cee03e1fb7ed04c80c5" exitCode=2
Nov 24 09:09:47 crc kubenswrapper[4886]: I1124 09:09:47.609799 4886 generic.go:334] "Generic (PLEG): container finished" podID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerID="29c9d2823267a5c549796b47a7f559500614ce366147eda6f2233162fdf03b24" exitCode=0
Nov 24 09:09:47 crc kubenswrapper[4886]: I1124 09:09:47.609857 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerDied","Data":"42e7ca5bb136971ff3d79892e394ef186e97834ae557592832d118b7f883de10"}
Nov 24 09:09:47 crc kubenswrapper[4886]: I1124 09:09:47.609924 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerDied","Data":"b9f144ad8b5222ae4f48fa181c294c79c4dc1fdd5a4a9cee03e1fb7ed04c80c5"}
Nov 24 09:09:47 crc kubenswrapper[4886]: I1124 09:09:47.609943 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerDied","Data":"29c9d2823267a5c549796b47a7f559500614ce366147eda6f2233162fdf03b24"}
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.112028 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.651486 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rmkch"]
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.653619 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.656417 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.657082 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.670062 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rmkch"]
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.755368 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwwsn\" (UniqueName: \"kubernetes.io/projected/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-kube-api-access-nwwsn\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.755483 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.755530 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-scripts\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.755596 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-config-data\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.858279 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwwsn\" (UniqueName: \"kubernetes.io/projected/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-kube-api-access-nwwsn\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.858424 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.858464 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-scripts\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.858527 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-config-data\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.891926 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-scripts\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.901313 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.905264 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-config-data\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.908321 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.918588 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.937354 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwwsn\" (UniqueName: \"kubernetes.io/projected/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-kube-api-access-nwwsn\") pod \"nova-cell0-cell-mapping-rmkch\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.937365 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.938351 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.979331 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rmkch"
Nov 24 09:09:50 crc kubenswrapper[4886]: I1124 09:09:50.986246 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.002918 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.021089 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.052840 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.076629 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.077000 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-config-data\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.077690 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqkd\" (UniqueName: \"kubernetes.io/projected/7491be7c-1c75-4527-8f77-290e42796216-kube-api-access-4qqkd\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.092264 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7491be7c-1c75-4527-8f77-290e42796216-logs\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.143240 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.145238 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.151293 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.168048 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.183426 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2fd72"]
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.186335 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201164 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201248 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqcnt\" (UniqueName: \"kubernetes.io/projected/0d0df575-904e-4913-9463-7c776faedd7e-kube-api-access-mqcnt\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201331 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bb02ba-ed82-471c-b04f-89fd336948b3-logs\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201423 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-config-data\") pod \"nova-scheduler-0\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201452 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201524 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-config\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201580 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201631 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-config-data\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201801 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201946 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44wkw\" (UniqueName: \"kubernetes.io/projected/39bb02ba-ed82-471c-b04f-89fd336948b3-kube-api-access-44wkw\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.201975 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqkd\" (UniqueName: \"kubernetes.io/projected/7491be7c-1c75-4527-8f77-290e42796216-kube-api-access-4qqkd\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.202006 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z7b8\" (UniqueName: \"kubernetes.io/projected/89ccf13d-537d-485f-ab6d-448bc8171089-kube-api-access-2z7b8\") pod \"nova-scheduler-0\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.202040 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-config-data\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.202077 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.202096 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7491be7c-1c75-4527-8f77-290e42796216-logs\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.202119 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.208865 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7491be7c-1c75-4527-8f77-290e42796216-logs\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.222330 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.245040 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-config-data\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.277952 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qqkd\" (UniqueName: \"kubernetes.io/projected/7491be7c-1c75-4527-8f77-290e42796216-kube-api-access-4qqkd\") pod \"nova-api-0\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " pod="openstack/nova-api-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.280836 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2fd72"]
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.292577 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.308735 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.308861 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44wkw\" (UniqueName: \"kubernetes.io/projected/39bb02ba-ed82-471c-b04f-89fd336948b3-kube-api-access-44wkw\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.308902 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z7b8\" (UniqueName: \"kubernetes.io/projected/89ccf13d-537d-485f-ab6d-448bc8171089-kube-api-access-2z7b8\") pod \"nova-scheduler-0\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.308933 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-config-data\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.308964 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.308988 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.309039 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0"
Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.309069 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-mqcnt\" (UniqueName: \"kubernetes.io/projected/0d0df575-904e-4913-9463-7c776faedd7e-kube-api-access-mqcnt\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.309324 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bb02ba-ed82-471c-b04f-89fd336948b3-logs\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.309362 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-config-data\") pod \"nova-scheduler-0\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.309388 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.309431 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-config\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.309471 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: 
\"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.314216 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.314779 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.317459 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bb02ba-ed82-471c-b04f-89fd336948b3-logs\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.318168 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.319811 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.321811 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-config\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: 
\"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.326529 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.329830 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-config-data\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.334601 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.337512 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.337580 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-config-data\") pod \"nova-scheduler-0\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.337784 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.352963 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.353605 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z7b8\" (UniqueName: \"kubernetes.io/projected/89ccf13d-537d-485f-ab6d-448bc8171089-kube-api-access-2z7b8\") pod \"nova-scheduler-0\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " pod="openstack/nova-scheduler-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.355401 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.398131 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqcnt\" (UniqueName: \"kubernetes.io/projected/0d0df575-904e-4913-9463-7c776faedd7e-kube-api-access-mqcnt\") pod \"dnsmasq-dns-bccf8f775-2fd72\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.410642 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44wkw\" (UniqueName: \"kubernetes.io/projected/39bb02ba-ed82-471c-b04f-89fd336948b3-kube-api-access-44wkw\") pod \"nova-metadata-0\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " pod="openstack/nova-metadata-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.411493 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 
09:09:51.411544 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtdcw\" (UniqueName: \"kubernetes.io/projected/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-kube-api-access-mtdcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.411608 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.514890 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.514979 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtdcw\" (UniqueName: \"kubernetes.io/projected/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-kube-api-access-mtdcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.515042 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.528029 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.531394 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.543304 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtdcw\" (UniqueName: \"kubernetes.io/projected/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-kube-api-access-mtdcw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.563572 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.601867 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.658208 4886 generic.go:334] "Generic (PLEG): container finished" podID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerID="8378fb38facacf81faffbc6dcc248fc5d22e3789c995a907f8de6397d8089d16" exitCode=0 Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.658287 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerDied","Data":"8378fb38facacf81faffbc6dcc248fc5d22e3789c995a907f8de6397d8089d16"} Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.681394 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.728126 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.758093 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.828501 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-log-httpd\") pod \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.828625 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-sg-core-conf-yaml\") pod \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.828705 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdjrk\" (UniqueName: \"kubernetes.io/projected/eae9a888-7e95-434a-b28b-4d2ec4e6483a-kube-api-access-sdjrk\") pod \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.828790 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-combined-ca-bundle\") pod \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.828884 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-run-httpd\") pod \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.828925 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-config-data\") pod \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.829013 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-scripts\") pod \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\" (UID: \"eae9a888-7e95-434a-b28b-4d2ec4e6483a\") " Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.833510 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eae9a888-7e95-434a-b28b-4d2ec4e6483a" (UID: "eae9a888-7e95-434a-b28b-4d2ec4e6483a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.835726 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-scripts" (OuterVolumeSpecName: "scripts") pod "eae9a888-7e95-434a-b28b-4d2ec4e6483a" (UID: "eae9a888-7e95-434a-b28b-4d2ec4e6483a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.838581 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eae9a888-7e95-434a-b28b-4d2ec4e6483a" (UID: "eae9a888-7e95-434a-b28b-4d2ec4e6483a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.842257 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae9a888-7e95-434a-b28b-4d2ec4e6483a-kube-api-access-sdjrk" (OuterVolumeSpecName: "kube-api-access-sdjrk") pod "eae9a888-7e95-434a-b28b-4d2ec4e6483a" (UID: "eae9a888-7e95-434a-b28b-4d2ec4e6483a"). InnerVolumeSpecName "kube-api-access-sdjrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.922858 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eae9a888-7e95-434a-b28b-4d2ec4e6483a" (UID: "eae9a888-7e95-434a-b28b-4d2ec4e6483a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.933617 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.933943 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.934005 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eae9a888-7e95-434a-b28b-4d2ec4e6483a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.934078 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-sg-core-conf-yaml\") on node \"crc\" DevicePath 
\"\"" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.934145 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdjrk\" (UniqueName: \"kubernetes.io/projected/eae9a888-7e95-434a-b28b-4d2ec4e6483a-kube-api-access-sdjrk\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:51 crc kubenswrapper[4886]: I1124 09:09:51.961239 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rmkch"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.072425 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eae9a888-7e95-434a-b28b-4d2ec4e6483a" (UID: "eae9a888-7e95-434a-b28b-4d2ec4e6483a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.100731 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-config-data" (OuterVolumeSpecName: "config-data") pod "eae9a888-7e95-434a-b28b-4d2ec4e6483a" (UID: "eae9a888-7e95-434a-b28b-4d2ec4e6483a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.140990 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.141032 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9a888-7e95-434a-b28b-4d2ec4e6483a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.164652 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.202312 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c59cj"] Nov 24 09:09:52 crc kubenswrapper[4886]: E1124 09:09:52.202893 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="proxy-httpd" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.202911 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="proxy-httpd" Nov 24 09:09:52 crc kubenswrapper[4886]: E1124 09:09:52.202929 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="sg-core" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.202937 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="sg-core" Nov 24 09:09:52 crc kubenswrapper[4886]: E1124 09:09:52.202965 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="ceilometer-notification-agent" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.202973 4886 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="ceilometer-notification-agent" Nov 24 09:09:52 crc kubenswrapper[4886]: E1124 09:09:52.202993 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="ceilometer-central-agent" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.203004 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="ceilometer-central-agent" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.203270 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="ceilometer-notification-agent" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.203297 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="ceilometer-central-agent" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.203324 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="sg-core" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.203335 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" containerName="proxy-httpd" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.204257 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.208785 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.209013 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.223651 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c59cj"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.346522 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.346790 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srwnv\" (UniqueName: \"kubernetes.io/projected/894a9de8-fef6-45c0-9464-fca3f25587e9-kube-api-access-srwnv\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.347020 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-scripts\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.347137 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-config-data\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.358382 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.448431 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.449125 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srwnv\" (UniqueName: \"kubernetes.io/projected/894a9de8-fef6-45c0-9464-fca3f25587e9-kube-api-access-srwnv\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.449212 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-scripts\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.449256 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-config-data\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: 
\"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.458593 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.458890 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-scripts\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.461078 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-config-data\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.485359 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srwnv\" (UniqueName: \"kubernetes.io/projected/894a9de8-fef6-45c0-9464-fca3f25587e9-kube-api-access-srwnv\") pod \"nova-cell1-conductor-db-sync-c59cj\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.514088 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2fd72"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.537465 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:09:52 crc 
kubenswrapper[4886]: I1124 09:09:52.549414 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.658591 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.683073 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39bb02ba-ed82-471c-b04f-89fd336948b3","Type":"ContainerStarted","Data":"e2ffd1e3702717fe67b1c4c58548cda52df22dea77670df3c6025e927e3c49b7"} Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.694590 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eae9a888-7e95-434a-b28b-4d2ec4e6483a","Type":"ContainerDied","Data":"007747c057d96c51000ba43e25391508d3c3dc4e8cd192e4646e9f46e71f5fe2"} Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.694663 4886 scope.go:117] "RemoveContainer" containerID="42e7ca5bb136971ff3d79892e394ef186e97834ae557592832d118b7f883de10" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.694821 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.707655 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" event={"ID":"0d0df575-904e-4913-9463-7c776faedd7e","Type":"ContainerStarted","Data":"3ea60b2f2737e31e5db211755e123229296a9b0c8c5c8573c502025d7b846cd3"} Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.715590 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7491be7c-1c75-4527-8f77-290e42796216","Type":"ContainerStarted","Data":"ce4f89d1baada1da78a6cb6c1ddc6e2de415996c3251dc09cc7c2cf66f541093"} Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.718341 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89ccf13d-537d-485f-ab6d-448bc8171089","Type":"ContainerStarted","Data":"1af34c5b304a2dd7774622a5e7ce2f83738d76d7793e8f5bb057a8afd2fc9390"} Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.720532 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rmkch" event={"ID":"a7266b1c-bb21-4f54-994c-52ab5db8d4eb","Type":"ContainerStarted","Data":"1abe7a1f7783f8358d0fa155117a9f0b2511fc0b3628864bbfc37391e4d314cf"} Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.721025 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rmkch" event={"ID":"a7266b1c-bb21-4f54-994c-52ab5db8d4eb","Type":"ContainerStarted","Data":"ac82bbb990d56fda9b22ca7f9fc075500732f7e04836fc643bec91f6f9838f5c"} Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.762215 4886 scope.go:117] "RemoveContainer" containerID="b9f144ad8b5222ae4f48fa181c294c79c4dc1fdd5a4a9cee03e1fb7ed04c80c5" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.776291 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.788578 4886 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.798000 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rmkch" podStartSLOduration=2.797971455 podStartE2EDuration="2.797971455s" podCreationTimestamp="2025-11-24 09:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:09:52.788547157 +0000 UTC m=+1248.675285292" watchObservedRunningTime="2025-11-24 09:09:52.797971455 +0000 UTC m=+1248.684709590" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.808672 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.811871 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.818775 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.820921 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.869183 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.869245 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-run-httpd\") pod \"ceilometer-0\" (UID: 
\"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.869270 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.869296 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-config-data\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.869444 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-log-httpd\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.869468 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-scripts\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.869559 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j5cd\" (UniqueName: \"kubernetes.io/projected/f87697b1-d576-4e2b-ba20-04d9453e0656-kube-api-access-9j5cd\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.882527 
4886 scope.go:117] "RemoveContainer" containerID="8378fb38facacf81faffbc6dcc248fc5d22e3789c995a907f8de6397d8089d16" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.898107 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae9a888-7e95-434a-b28b-4d2ec4e6483a" path="/var/lib/kubelet/pods/eae9a888-7e95-434a-b28b-4d2ec4e6483a/volumes" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.899171 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.957350 4886 scope.go:117] "RemoveContainer" containerID="29c9d2823267a5c549796b47a7f559500614ce366147eda6f2233162fdf03b24" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.971597 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.971898 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-run-httpd\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.972035 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.972131 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-config-data\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.972270 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-log-httpd\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.972350 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-scripts\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.972465 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j5cd\" (UniqueName: \"kubernetes.io/projected/f87697b1-d576-4e2b-ba20-04d9453e0656-kube-api-access-9j5cd\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.973276 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-log-httpd\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.973548 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-run-httpd\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.978652 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.979407 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.980541 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-scripts\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.982663 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-config-data\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:52 crc kubenswrapper[4886]: I1124 09:09:52.996134 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j5cd\" (UniqueName: \"kubernetes.io/projected/f87697b1-d576-4e2b-ba20-04d9453e0656-kube-api-access-9j5cd\") pod \"ceilometer-0\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " pod="openstack/ceilometer-0" Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.120850 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c59cj"] Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.249533 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.763500 4886 generic.go:334] "Generic (PLEG): container finished" podID="0d0df575-904e-4913-9463-7c776faedd7e" containerID="7debdc75d3d3bdd633971c610231669c7ebcf26d75f311aabdd8543db55bd51c" exitCode=0 Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.763721 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" event={"ID":"0d0df575-904e-4913-9463-7c776faedd7e","Type":"ContainerDied","Data":"7debdc75d3d3bdd633971c610231669c7ebcf26d75f311aabdd8543db55bd51c"} Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.772493 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c59cj" event={"ID":"894a9de8-fef6-45c0-9464-fca3f25587e9","Type":"ContainerStarted","Data":"4785f8ddb6e0c935ba3f80c8611df368145a2d78eba83d4bfc13e053f494c5c9"} Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.772541 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c59cj" event={"ID":"894a9de8-fef6-45c0-9464-fca3f25587e9","Type":"ContainerStarted","Data":"8f8c4be71daf83215ca543c60f929bda256c00d7af6ef87bf91fd07af9179df0"} Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.777474 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc","Type":"ContainerStarted","Data":"b6db09edd0afdbcf6c000602b5809f8f6ec6dd5a966b9af9cc8e5835cd1cee6e"} Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.827462 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-c59cj" podStartSLOduration=1.827428073 podStartE2EDuration="1.827428073s" podCreationTimestamp="2025-11-24 09:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 09:09:53.814553548 +0000 UTC m=+1249.701291683" watchObservedRunningTime="2025-11-24 09:09:53.827428073 +0000 UTC m=+1249.714166208" Nov 24 09:09:53 crc kubenswrapper[4886]: I1124 09:09:53.873030 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:09:54 crc kubenswrapper[4886]: I1124 09:09:54.734487 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:09:54 crc kubenswrapper[4886]: I1124 09:09:54.779492 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:09:54 crc kubenswrapper[4886]: I1124 09:09:54.791854 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerStarted","Data":"8f28f4f5f8c164ec7a9ca7ff68597f305a05de47eba718d687a3c90632fc750c"} Nov 24 09:09:54 crc kubenswrapper[4886]: I1124 09:09:54.795262 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" event={"ID":"0d0df575-904e-4913-9463-7c776faedd7e","Type":"ContainerStarted","Data":"8402e4fb47d36c81de4a102f9448cd038464cceec3b425edf13179f975413aec"} Nov 24 09:09:54 crc kubenswrapper[4886]: I1124 09:09:54.819459 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" podStartSLOduration=3.819428957 podStartE2EDuration="3.819428957s" podCreationTimestamp="2025-11-24 09:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:09:54.8149602 +0000 UTC m=+1250.701698335" watchObservedRunningTime="2025-11-24 09:09:54.819428957 +0000 UTC m=+1250.706167092" Nov 24 09:09:55 crc kubenswrapper[4886]: I1124 09:09:55.814605 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:09:57 crc 
kubenswrapper[4886]: I1124 09:09:57.902640 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.858936 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae" gracePeriod=30 Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.863464 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89ccf13d-537d-485f-ab6d-448bc8171089","Type":"ContainerStarted","Data":"ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc"} Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.863859 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc","Type":"ContainerStarted","Data":"de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae"} Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.863975 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39bb02ba-ed82-471c-b04f-89fd336948b3","Type":"ContainerStarted","Data":"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4"} Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.864044 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerName="nova-metadata-log" containerID="cri-o://dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1" gracePeriod=30 Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.864074 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"39bb02ba-ed82-471c-b04f-89fd336948b3","Type":"ContainerStarted","Data":"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1"} Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.864222 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerName="nova-metadata-metadata" containerID="cri-o://994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4" gracePeriod=30 Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.869020 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerStarted","Data":"8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331"} Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.869090 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerStarted","Data":"53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1"} Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.881852 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7491be7c-1c75-4527-8f77-290e42796216","Type":"ContainerStarted","Data":"f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8"} Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.881912 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7491be7c-1c75-4527-8f77-290e42796216","Type":"ContainerStarted","Data":"099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27"} Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.912066 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.841116621 podStartE2EDuration="7.912023954s" podCreationTimestamp="2025-11-24 09:09:51 +0000 UTC" 
firstStartedPulling="2025-11-24 09:09:52.365341433 +0000 UTC m=+1248.252079568" lastFinishedPulling="2025-11-24 09:09:57.436248766 +0000 UTC m=+1253.322986901" observedRunningTime="2025-11-24 09:09:58.879736847 +0000 UTC m=+1254.766474982" watchObservedRunningTime="2025-11-24 09:09:58.912023954 +0000 UTC m=+1254.798762089" Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.950823 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.18809182 podStartE2EDuration="7.950787175s" podCreationTimestamp="2025-11-24 09:09:51 +0000 UTC" firstStartedPulling="2025-11-24 09:09:52.675329191 +0000 UTC m=+1248.562067326" lastFinishedPulling="2025-11-24 09:09:57.438024546 +0000 UTC m=+1253.324762681" observedRunningTime="2025-11-24 09:09:58.900045784 +0000 UTC m=+1254.786783919" watchObservedRunningTime="2025-11-24 09:09:58.950787175 +0000 UTC m=+1254.837525300" Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.984203 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.064386156 podStartE2EDuration="8.984163374s" podCreationTimestamp="2025-11-24 09:09:50 +0000 UTC" firstStartedPulling="2025-11-24 09:09:52.517654211 +0000 UTC m=+1248.404392356" lastFinishedPulling="2025-11-24 09:09:57.437431449 +0000 UTC m=+1253.324169574" observedRunningTime="2025-11-24 09:09:58.917775957 +0000 UTC m=+1254.804514092" watchObservedRunningTime="2025-11-24 09:09:58.984163374 +0000 UTC m=+1254.870901499" Nov 24 09:09:58 crc kubenswrapper[4886]: I1124 09:09:58.991920 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.718550699 podStartE2EDuration="8.991903584s" podCreationTimestamp="2025-11-24 09:09:50 +0000 UTC" firstStartedPulling="2025-11-24 09:09:52.162952442 +0000 UTC m=+1248.049690577" lastFinishedPulling="2025-11-24 09:09:57.436305317 +0000 UTC 
m=+1253.323043462" observedRunningTime="2025-11-24 09:09:58.975344363 +0000 UTC m=+1254.862082508" watchObservedRunningTime="2025-11-24 09:09:58.991903584 +0000 UTC m=+1254.878641719" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.772475 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.806956 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bb02ba-ed82-471c-b04f-89fd336948b3-logs\") pod \"39bb02ba-ed82-471c-b04f-89fd336948b3\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.807808 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44wkw\" (UniqueName: \"kubernetes.io/projected/39bb02ba-ed82-471c-b04f-89fd336948b3-kube-api-access-44wkw\") pod \"39bb02ba-ed82-471c-b04f-89fd336948b3\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.807885 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-config-data\") pod \"39bb02ba-ed82-471c-b04f-89fd336948b3\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.808066 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-combined-ca-bundle\") pod \"39bb02ba-ed82-471c-b04f-89fd336948b3\" (UID: \"39bb02ba-ed82-471c-b04f-89fd336948b3\") " Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.807463 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39bb02ba-ed82-471c-b04f-89fd336948b3-logs" (OuterVolumeSpecName: 
"logs") pod "39bb02ba-ed82-471c-b04f-89fd336948b3" (UID: "39bb02ba-ed82-471c-b04f-89fd336948b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.816603 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bb02ba-ed82-471c-b04f-89fd336948b3-kube-api-access-44wkw" (OuterVolumeSpecName: "kube-api-access-44wkw") pod "39bb02ba-ed82-471c-b04f-89fd336948b3" (UID: "39bb02ba-ed82-471c-b04f-89fd336948b3"). InnerVolumeSpecName "kube-api-access-44wkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.841122 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39bb02ba-ed82-471c-b04f-89fd336948b3" (UID: "39bb02ba-ed82-471c-b04f-89fd336948b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.856592 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-config-data" (OuterVolumeSpecName: "config-data") pod "39bb02ba-ed82-471c-b04f-89fd336948b3" (UID: "39bb02ba-ed82-471c-b04f-89fd336948b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.909742 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.909780 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bb02ba-ed82-471c-b04f-89fd336948b3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.909793 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44wkw\" (UniqueName: \"kubernetes.io/projected/39bb02ba-ed82-471c-b04f-89fd336948b3-kube-api-access-44wkw\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.909807 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bb02ba-ed82-471c-b04f-89fd336948b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.919584 4886 generic.go:334] "Generic (PLEG): container finished" podID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerID="994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4" exitCode=0 Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.920136 4886 generic.go:334] "Generic (PLEG): container finished" podID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerID="dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1" exitCode=143 Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.920070 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.920092 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39bb02ba-ed82-471c-b04f-89fd336948b3","Type":"ContainerDied","Data":"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4"} Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.924812 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39bb02ba-ed82-471c-b04f-89fd336948b3","Type":"ContainerDied","Data":"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1"} Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.924831 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"39bb02ba-ed82-471c-b04f-89fd336948b3","Type":"ContainerDied","Data":"e2ffd1e3702717fe67b1c4c58548cda52df22dea77670df3c6025e927e3c49b7"} Nov 24 09:09:59 crc kubenswrapper[4886]: I1124 09:09:59.924853 4886 scope.go:117] "RemoveContainer" containerID="994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.018033 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.018380 4886 scope.go:117] "RemoveContainer" containerID="dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.033727 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.047734 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:00 crc kubenswrapper[4886]: E1124 09:10:00.048698 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerName="nova-metadata-log" Nov 24 09:10:00 crc 
kubenswrapper[4886]: I1124 09:10:00.048729 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerName="nova-metadata-log" Nov 24 09:10:00 crc kubenswrapper[4886]: E1124 09:10:00.048765 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerName="nova-metadata-metadata" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.048776 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerName="nova-metadata-metadata" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.049056 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerName="nova-metadata-metadata" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.049094 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" containerName="nova-metadata-log" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.050882 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.056549 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.056930 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.060250 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.120850 4886 scope.go:117] "RemoveContainer" containerID="994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.122142 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.122303 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2brt5\" (UniqueName: \"kubernetes.io/projected/3929bf00-c514-4be5-a088-5bd22e946532-kube-api-access-2brt5\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.122745 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.122848 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3929bf00-c514-4be5-a088-5bd22e946532-logs\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.123520 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-config-data\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: E1124 09:10:00.125956 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4\": container with ID starting with 994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4 not found: ID does not exist" containerID="994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.125994 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4"} err="failed to get container status \"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4\": rpc error: code = NotFound desc = could not find container \"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4\": container with ID starting with 994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4 not found: ID does not exist" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.126057 4886 scope.go:117] "RemoveContainer" containerID="dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1" Nov 24 09:10:00 crc kubenswrapper[4886]: E1124 09:10:00.126644 4886 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1\": container with ID starting with dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1 not found: ID does not exist" containerID="dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.126667 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1"} err="failed to get container status \"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1\": rpc error: code = NotFound desc = could not find container \"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1\": container with ID starting with dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1 not found: ID does not exist" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.126683 4886 scope.go:117] "RemoveContainer" containerID="994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.126947 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4"} err="failed to get container status \"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4\": rpc error: code = NotFound desc = could not find container \"994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4\": container with ID starting with 994fa43d031b5d24ab3755a87ef24fb72bd1187a7023bcf539c74e5b7a3f95d4 not found: ID does not exist" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.126965 4886 scope.go:117] "RemoveContainer" containerID="dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.127183 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1"} err="failed to get container status \"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1\": rpc error: code = NotFound desc = could not find container \"dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1\": container with ID starting with dd4aa94d471878180306e271d8236da70f4a485a2bd3ca9786cb50cb52861af1 not found: ID does not exist" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.224577 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2brt5\" (UniqueName: \"kubernetes.io/projected/3929bf00-c514-4be5-a088-5bd22e946532-kube-api-access-2brt5\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.224942 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.225093 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3929bf00-c514-4be5-a088-5bd22e946532-logs\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.225307 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-config-data\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc 
kubenswrapper[4886]: I1124 09:10:00.225450 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.225622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3929bf00-c514-4be5-a088-5bd22e946532-logs\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.230875 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.234474 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.234893 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-config-data\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.247372 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2brt5\" (UniqueName: 
\"kubernetes.io/projected/3929bf00-c514-4be5-a088-5bd22e946532-kube-api-access-2brt5\") pod \"nova-metadata-0\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.427370 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.861985 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bb02ba-ed82-471c-b04f-89fd336948b3" path="/var/lib/kubelet/pods/39bb02ba-ed82-471c-b04f-89fd336948b3/volumes" Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.932801 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:00 crc kubenswrapper[4886]: I1124 09:10:00.938060 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerStarted","Data":"918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b"} Nov 24 09:10:00 crc kubenswrapper[4886]: W1124 09:10:00.938851 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3929bf00_c514_4be5_a088_5bd22e946532.slice/crio-f1e7d30f02ae95fe5a92bc4c630693277241f313ded7e91ffbbbda0421e14fec WatchSource:0}: Error finding container f1e7d30f02ae95fe5a92bc4c630693277241f313ded7e91ffbbbda0421e14fec: Status 404 returned error can't find the container with id f1e7d30f02ae95fe5a92bc4c630693277241f313ded7e91ffbbbda0421e14fec Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.356244 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.356715 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 
09:10:01.564311 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.564825 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.604374 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.605955 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.761048 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.782626 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-5mr4b"] Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.782877 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" podUID="ae356139-b1b8-4952-9aa4-e233d04a9a08" containerName="dnsmasq-dns" containerID="cri-o://ab8d7eb39a71552b9e686400215af23d4665ff6caeed8b010eccf04885875aa4" gracePeriod=10 Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.784908 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.784997 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.785069 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.787221 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36e22a101132c390ac35de718c60f14be6675ff8618943dfbe4e49f19370e8c5"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:10:01 crc kubenswrapper[4886]: I1124 09:10:01.787630 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://36e22a101132c390ac35de718c60f14be6675ff8618943dfbe4e49f19370e8c5" gracePeriod=600 Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.033038 4886 generic.go:334] "Generic (PLEG): container finished" podID="ae356139-b1b8-4952-9aa4-e233d04a9a08" containerID="ab8d7eb39a71552b9e686400215af23d4665ff6caeed8b010eccf04885875aa4" exitCode=0 Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.033313 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" event={"ID":"ae356139-b1b8-4952-9aa4-e233d04a9a08","Type":"ContainerDied","Data":"ab8d7eb39a71552b9e686400215af23d4665ff6caeed8b010eccf04885875aa4"} Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.059195 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3929bf00-c514-4be5-a088-5bd22e946532","Type":"ContainerStarted","Data":"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7"} Nov 24 
09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.062522 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3929bf00-c514-4be5-a088-5bd22e946532","Type":"ContainerStarted","Data":"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b"} Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.062567 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3929bf00-c514-4be5-a088-5bd22e946532","Type":"ContainerStarted","Data":"f1e7d30f02ae95fe5a92bc4c630693277241f313ded7e91ffbbbda0421e14fec"} Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.072591 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerStarted","Data":"c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2"} Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.073845 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.098641 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.098611369 podStartE2EDuration="2.098611369s" podCreationTimestamp="2025-11-24 09:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:02.087305608 +0000 UTC m=+1257.974043743" watchObservedRunningTime="2025-11-24 09:10:02.098611369 +0000 UTC m=+1257.985349504" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.111872 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="36e22a101132c390ac35de718c60f14be6675ff8618943dfbe4e49f19370e8c5" exitCode=0 Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.113383 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"36e22a101132c390ac35de718c60f14be6675ff8618943dfbe4e49f19370e8c5"} Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.113436 4886 scope.go:117] "RemoveContainer" containerID="c28e4d2681964faf5e8db0a7f606c313301cd5d8f7fd6af733f6e4caf7367ebc" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.202286 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.237858 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.690437172 podStartE2EDuration="10.237835925s" podCreationTimestamp="2025-11-24 09:09:52 +0000 UTC" firstStartedPulling="2025-11-24 09:09:53.931945533 +0000 UTC m=+1249.818683668" lastFinishedPulling="2025-11-24 09:10:01.479344286 +0000 UTC m=+1257.366082421" observedRunningTime="2025-11-24 09:10:02.124665019 +0000 UTC m=+1258.011403164" watchObservedRunningTime="2025-11-24 09:10:02.237835925 +0000 UTC m=+1258.124574060" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.439431 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.439501 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.485606 4886 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.672316 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-svc\") pod \"ae356139-b1b8-4952-9aa4-e233d04a9a08\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.672388 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-sb\") pod \"ae356139-b1b8-4952-9aa4-e233d04a9a08\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.672470 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk5tt\" (UniqueName: \"kubernetes.io/projected/ae356139-b1b8-4952-9aa4-e233d04a9a08-kube-api-access-hk5tt\") pod \"ae356139-b1b8-4952-9aa4-e233d04a9a08\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.672512 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-nb\") pod \"ae356139-b1b8-4952-9aa4-e233d04a9a08\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.672545 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-swift-storage-0\") pod \"ae356139-b1b8-4952-9aa4-e233d04a9a08\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.672624 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-config\") pod \"ae356139-b1b8-4952-9aa4-e233d04a9a08\" (UID: \"ae356139-b1b8-4952-9aa4-e233d04a9a08\") " Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.685443 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae356139-b1b8-4952-9aa4-e233d04a9a08-kube-api-access-hk5tt" (OuterVolumeSpecName: "kube-api-access-hk5tt") pod "ae356139-b1b8-4952-9aa4-e233d04a9a08" (UID: "ae356139-b1b8-4952-9aa4-e233d04a9a08"). InnerVolumeSpecName "kube-api-access-hk5tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.775890 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk5tt\" (UniqueName: \"kubernetes.io/projected/ae356139-b1b8-4952-9aa4-e233d04a9a08-kube-api-access-hk5tt\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.783216 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae356139-b1b8-4952-9aa4-e233d04a9a08" (UID: "ae356139-b1b8-4952-9aa4-e233d04a9a08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.783771 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ae356139-b1b8-4952-9aa4-e233d04a9a08" (UID: "ae356139-b1b8-4952-9aa4-e233d04a9a08"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.815694 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae356139-b1b8-4952-9aa4-e233d04a9a08" (UID: "ae356139-b1b8-4952-9aa4-e233d04a9a08"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.818387 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae356139-b1b8-4952-9aa4-e233d04a9a08" (UID: "ae356139-b1b8-4952-9aa4-e233d04a9a08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.830927 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-config" (OuterVolumeSpecName: "config") pod "ae356139-b1b8-4952-9aa4-e233d04a9a08" (UID: "ae356139-b1b8-4952-9aa4-e233d04a9a08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.877724 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.877769 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.877787 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.877801 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:02 crc kubenswrapper[4886]: I1124 09:10:02.877815 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae356139-b1b8-4952-9aa4-e233d04a9a08-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 09:10:03.129194 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rmkch" event={"ID":"a7266b1c-bb21-4f54-994c-52ab5db8d4eb","Type":"ContainerDied","Data":"1abe7a1f7783f8358d0fa155117a9f0b2511fc0b3628864bbfc37391e4d314cf"} Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 09:10:03.130419 4886 generic.go:334] "Generic (PLEG): container finished" podID="a7266b1c-bb21-4f54-994c-52ab5db8d4eb" containerID="1abe7a1f7783f8358d0fa155117a9f0b2511fc0b3628864bbfc37391e4d314cf" exitCode=0 Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 
09:10:03.135347 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"4f19fc5058a9ae6baaee874798ea7b1c9ef07faaf66a15067253a35a9f971d8b"} Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 09:10:03.141364 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 09:10:03.141879 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-5mr4b" event={"ID":"ae356139-b1b8-4952-9aa4-e233d04a9a08","Type":"ContainerDied","Data":"f8de8427e939fb3d07342d41a287b1f0ed6ce3cca1ea67df27095399343cf7af"} Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 09:10:03.141918 4886 scope.go:117] "RemoveContainer" containerID="ab8d7eb39a71552b9e686400215af23d4665ff6caeed8b010eccf04885875aa4" Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 09:10:03.182946 4886 scope.go:117] "RemoveContainer" containerID="1d6704eda7e501572eb7ab47130a3a5c5fca8e659fac7045175870e3bbdb594e" Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 09:10:03.213334 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-5mr4b"] Nov 24 09:10:03 crc kubenswrapper[4886]: I1124 09:10:03.233229 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-5mr4b"] Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.613307 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rmkch" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.725037 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-scripts\") pod \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.725221 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwwsn\" (UniqueName: \"kubernetes.io/projected/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-kube-api-access-nwwsn\") pod \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.725272 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-combined-ca-bundle\") pod \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.725372 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-config-data\") pod \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\" (UID: \"a7266b1c-bb21-4f54-994c-52ab5db8d4eb\") " Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.733653 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-kube-api-access-nwwsn" (OuterVolumeSpecName: "kube-api-access-nwwsn") pod "a7266b1c-bb21-4f54-994c-52ab5db8d4eb" (UID: "a7266b1c-bb21-4f54-994c-52ab5db8d4eb"). InnerVolumeSpecName "kube-api-access-nwwsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.737571 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-scripts" (OuterVolumeSpecName: "scripts") pod "a7266b1c-bb21-4f54-994c-52ab5db8d4eb" (UID: "a7266b1c-bb21-4f54-994c-52ab5db8d4eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.770382 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7266b1c-bb21-4f54-994c-52ab5db8d4eb" (UID: "a7266b1c-bb21-4f54-994c-52ab5db8d4eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.771217 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-config-data" (OuterVolumeSpecName: "config-data") pod "a7266b1c-bb21-4f54-994c-52ab5db8d4eb" (UID: "a7266b1c-bb21-4f54-994c-52ab5db8d4eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.832458 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.832508 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwwsn\" (UniqueName: \"kubernetes.io/projected/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-kube-api-access-nwwsn\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.832533 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.832547 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7266b1c-bb21-4f54-994c-52ab5db8d4eb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:04 crc kubenswrapper[4886]: I1124 09:10:04.866228 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae356139-b1b8-4952-9aa4-e233d04a9a08" path="/var/lib/kubelet/pods/ae356139-b1b8-4952-9aa4-e233d04a9a08/volumes" Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.199588 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rmkch" event={"ID":"a7266b1c-bb21-4f54-994c-52ab5db8d4eb","Type":"ContainerDied","Data":"ac82bbb990d56fda9b22ca7f9fc075500732f7e04836fc643bec91f6f9838f5c"} Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.200322 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac82bbb990d56fda9b22ca7f9fc075500732f7e04836fc643bec91f6f9838f5c" Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.199724 4886 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rmkch" Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.370633 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.371132 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-log" containerID="cri-o://099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27" gracePeriod=30 Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.371228 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-api" containerID="cri-o://f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8" gracePeriod=30 Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.385810 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.386308 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="89ccf13d-537d-485f-ab6d-448bc8171089" containerName="nova-scheduler-scheduler" containerID="cri-o://ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc" gracePeriod=30 Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.429013 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.429412 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 09:10:05 crc kubenswrapper[4886]: I1124 09:10:05.497122 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:05 crc kubenswrapper[4886]: E1124 09:10:05.564082 4886 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7266b1c_bb21_4f54_994c_52ab5db8d4eb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7491be7c_1c75_4527_8f77_290e42796216.slice/crio-conmon-099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:10:06 crc kubenswrapper[4886]: I1124 09:10:06.214407 4886 generic.go:334] "Generic (PLEG): container finished" podID="7491be7c-1c75-4527-8f77-290e42796216" containerID="099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27" exitCode=143 Nov 24 09:10:06 crc kubenswrapper[4886]: I1124 09:10:06.214494 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7491be7c-1c75-4527-8f77-290e42796216","Type":"ContainerDied","Data":"099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27"} Nov 24 09:10:06 crc kubenswrapper[4886]: E1124 09:10:06.564674 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc is running failed: container process not found" containerID="ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 09:10:06 crc kubenswrapper[4886]: E1124 09:10:06.565599 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc is running failed: container process not found" containerID="ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 
09:10:06 crc kubenswrapper[4886]: E1124 09:10:06.566194 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc is running failed: container process not found" containerID="ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 09:10:06 crc kubenswrapper[4886]: E1124 09:10:06.566232 4886 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="89ccf13d-537d-485f-ab6d-448bc8171089" containerName="nova-scheduler-scheduler" Nov 24 09:10:06 crc kubenswrapper[4886]: I1124 09:10:06.820473 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:10:06 crc kubenswrapper[4886]: I1124 09:10:06.995476 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-config-data\") pod \"89ccf13d-537d-485f-ab6d-448bc8171089\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " Nov 24 09:10:06 crc kubenswrapper[4886]: I1124 09:10:06.995874 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z7b8\" (UniqueName: \"kubernetes.io/projected/89ccf13d-537d-485f-ab6d-448bc8171089-kube-api-access-2z7b8\") pod \"89ccf13d-537d-485f-ab6d-448bc8171089\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " Nov 24 09:10:06 crc kubenswrapper[4886]: I1124 09:10:06.995903 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-combined-ca-bundle\") pod \"89ccf13d-537d-485f-ab6d-448bc8171089\" (UID: \"89ccf13d-537d-485f-ab6d-448bc8171089\") " Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.013211 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ccf13d-537d-485f-ab6d-448bc8171089-kube-api-access-2z7b8" (OuterVolumeSpecName: "kube-api-access-2z7b8") pod "89ccf13d-537d-485f-ab6d-448bc8171089" (UID: "89ccf13d-537d-485f-ab6d-448bc8171089"). InnerVolumeSpecName "kube-api-access-2z7b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.031612 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-config-data" (OuterVolumeSpecName: "config-data") pod "89ccf13d-537d-485f-ab6d-448bc8171089" (UID: "89ccf13d-537d-485f-ab6d-448bc8171089"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.043065 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89ccf13d-537d-485f-ab6d-448bc8171089" (UID: "89ccf13d-537d-485f-ab6d-448bc8171089"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.099725 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z7b8\" (UniqueName: \"kubernetes.io/projected/89ccf13d-537d-485f-ab6d-448bc8171089-kube-api-access-2z7b8\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.099775 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.099787 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ccf13d-537d-485f-ab6d-448bc8171089-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.229737 4886 generic.go:334] "Generic (PLEG): container finished" podID="894a9de8-fef6-45c0-9464-fca3f25587e9" containerID="4785f8ddb6e0c935ba3f80c8611df368145a2d78eba83d4bfc13e053f494c5c9" exitCode=0 Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.229838 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c59cj" event={"ID":"894a9de8-fef6-45c0-9464-fca3f25587e9","Type":"ContainerDied","Data":"4785f8ddb6e0c935ba3f80c8611df368145a2d78eba83d4bfc13e053f494c5c9"} Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.235360 4886 generic.go:334] "Generic (PLEG): container 
finished" podID="89ccf13d-537d-485f-ab6d-448bc8171089" containerID="ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc" exitCode=0 Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.235499 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.235507 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89ccf13d-537d-485f-ab6d-448bc8171089","Type":"ContainerDied","Data":"ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc"} Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.235604 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89ccf13d-537d-485f-ab6d-448bc8171089","Type":"ContainerDied","Data":"1af34c5b304a2dd7774622a5e7ce2f83738d76d7793e8f5bb057a8afd2fc9390"} Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.235641 4886 scope.go:117] "RemoveContainer" containerID="ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.235695 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3929bf00-c514-4be5-a088-5bd22e946532" containerName="nova-metadata-log" containerID="cri-o://184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b" gracePeriod=30 Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.235815 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3929bf00-c514-4be5-a088-5bd22e946532" containerName="nova-metadata-metadata" containerID="cri-o://2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7" gracePeriod=30 Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.285745 4886 scope.go:117] "RemoveContainer" containerID="ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc" Nov 24 
09:10:07 crc kubenswrapper[4886]: E1124 09:10:07.286470 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc\": container with ID starting with ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc not found: ID does not exist" containerID="ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.286539 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc"} err="failed to get container status \"ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc\": rpc error: code = NotFound desc = could not find container \"ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc\": container with ID starting with ac81ebbfa02a4916ba620ed388a3a5d33ef95d9f55675f47fb94ad64f22e51bc not found: ID does not exist" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.294307 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.307717 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.329558 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:07 crc kubenswrapper[4886]: E1124 09:10:07.330122 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae356139-b1b8-4952-9aa4-e233d04a9a08" containerName="dnsmasq-dns" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.330167 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae356139-b1b8-4952-9aa4-e233d04a9a08" containerName="dnsmasq-dns" Nov 24 09:10:07 crc kubenswrapper[4886]: E1124 09:10:07.330199 4886 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ae356139-b1b8-4952-9aa4-e233d04a9a08" containerName="init" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.330207 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae356139-b1b8-4952-9aa4-e233d04a9a08" containerName="init" Nov 24 09:10:07 crc kubenswrapper[4886]: E1124 09:10:07.330220 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7266b1c-bb21-4f54-994c-52ab5db8d4eb" containerName="nova-manage" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.330227 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7266b1c-bb21-4f54-994c-52ab5db8d4eb" containerName="nova-manage" Nov 24 09:10:07 crc kubenswrapper[4886]: E1124 09:10:07.330236 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ccf13d-537d-485f-ab6d-448bc8171089" containerName="nova-scheduler-scheduler" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.330245 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ccf13d-537d-485f-ab6d-448bc8171089" containerName="nova-scheduler-scheduler" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.330499 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae356139-b1b8-4952-9aa4-e233d04a9a08" containerName="dnsmasq-dns" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.330531 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7266b1c-bb21-4f54-994c-52ab5db8d4eb" containerName="nova-manage" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.330554 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ccf13d-537d-485f-ab6d-448bc8171089" containerName="nova-scheduler-scheduler" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.334378 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.338352 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.349405 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.508883 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bpkl\" (UniqueName: \"kubernetes.io/projected/0402c99c-2124-499a-8682-bc7cab563f47-kube-api-access-9bpkl\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.509995 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.510380 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-config-data\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.612974 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.613091 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-config-data\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.613133 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bpkl\" (UniqueName: \"kubernetes.io/projected/0402c99c-2124-499a-8682-bc7cab563f47-kube-api-access-9bpkl\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.637408 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-config-data\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.639127 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.660193 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bpkl\" (UniqueName: \"kubernetes.io/projected/0402c99c-2124-499a-8682-bc7cab563f47-kube-api-access-9bpkl\") pod \"nova-scheduler-0\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:07 crc kubenswrapper[4886]: I1124 09:10:07.853095 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.025075 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.123002 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-nova-metadata-tls-certs\") pod \"3929bf00-c514-4be5-a088-5bd22e946532\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.123351 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2brt5\" (UniqueName: \"kubernetes.io/projected/3929bf00-c514-4be5-a088-5bd22e946532-kube-api-access-2brt5\") pod \"3929bf00-c514-4be5-a088-5bd22e946532\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.123422 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3929bf00-c514-4be5-a088-5bd22e946532-logs\") pod \"3929bf00-c514-4be5-a088-5bd22e946532\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.123475 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-combined-ca-bundle\") pod \"3929bf00-c514-4be5-a088-5bd22e946532\" (UID: \"3929bf00-c514-4be5-a088-5bd22e946532\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.123511 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-config-data\") pod \"3929bf00-c514-4be5-a088-5bd22e946532\" (UID: 
\"3929bf00-c514-4be5-a088-5bd22e946532\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.124135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3929bf00-c514-4be5-a088-5bd22e946532-logs" (OuterVolumeSpecName: "logs") pod "3929bf00-c514-4be5-a088-5bd22e946532" (UID: "3929bf00-c514-4be5-a088-5bd22e946532"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.130220 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3929bf00-c514-4be5-a088-5bd22e946532-kube-api-access-2brt5" (OuterVolumeSpecName: "kube-api-access-2brt5") pod "3929bf00-c514-4be5-a088-5bd22e946532" (UID: "3929bf00-c514-4be5-a088-5bd22e946532"). InnerVolumeSpecName "kube-api-access-2brt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.160379 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3929bf00-c514-4be5-a088-5bd22e946532" (UID: "3929bf00-c514-4be5-a088-5bd22e946532"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.162876 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-config-data" (OuterVolumeSpecName: "config-data") pod "3929bf00-c514-4be5-a088-5bd22e946532" (UID: "3929bf00-c514-4be5-a088-5bd22e946532"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.201140 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3929bf00-c514-4be5-a088-5bd22e946532" (UID: "3929bf00-c514-4be5-a088-5bd22e946532"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.226911 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3929bf00-c514-4be5-a088-5bd22e946532-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.226968 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.226983 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.226996 4886 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3929bf00-c514-4be5-a088-5bd22e946532-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.227009 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2brt5\" (UniqueName: \"kubernetes.io/projected/3929bf00-c514-4be5-a088-5bd22e946532-kube-api-access-2brt5\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.271372 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="3929bf00-c514-4be5-a088-5bd22e946532" containerID="2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7" exitCode=0 Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.271439 4886 generic.go:334] "Generic (PLEG): container finished" podID="3929bf00-c514-4be5-a088-5bd22e946532" containerID="184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b" exitCode=143 Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.271473 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.271529 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3929bf00-c514-4be5-a088-5bd22e946532","Type":"ContainerDied","Data":"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7"} Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.271619 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3929bf00-c514-4be5-a088-5bd22e946532","Type":"ContainerDied","Data":"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b"} Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.271644 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3929bf00-c514-4be5-a088-5bd22e946532","Type":"ContainerDied","Data":"f1e7d30f02ae95fe5a92bc4c630693277241f313ded7e91ffbbbda0421e14fec"} Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.271673 4886 scope.go:117] "RemoveContainer" containerID="2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.324649 4886 scope.go:117] "RemoveContainer" containerID="184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.332044 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:08 crc 
kubenswrapper[4886]: I1124 09:10:08.344084 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.363318 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:08 crc kubenswrapper[4886]: E1124 09:10:08.363944 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3929bf00-c514-4be5-a088-5bd22e946532" containerName="nova-metadata-log" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.363968 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3929bf00-c514-4be5-a088-5bd22e946532" containerName="nova-metadata-log" Nov 24 09:10:08 crc kubenswrapper[4886]: E1124 09:10:08.364002 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3929bf00-c514-4be5-a088-5bd22e946532" containerName="nova-metadata-metadata" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.364010 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3929bf00-c514-4be5-a088-5bd22e946532" containerName="nova-metadata-metadata" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.364248 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3929bf00-c514-4be5-a088-5bd22e946532" containerName="nova-metadata-metadata" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.364268 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3929bf00-c514-4be5-a088-5bd22e946532" containerName="nova-metadata-log" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.365509 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.370572 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.376376 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.376610 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.389700 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.402877 4886 scope.go:117] "RemoveContainer" containerID="2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7" Nov 24 09:10:08 crc kubenswrapper[4886]: E1124 09:10:08.404347 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7\": container with ID starting with 2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7 not found: ID does not exist" containerID="2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.404394 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7"} err="failed to get container status \"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7\": rpc error: code = NotFound desc = could not find container \"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7\": container with ID starting with 2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7 not found: ID does not exist" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 
09:10:08.404431 4886 scope.go:117] "RemoveContainer" containerID="184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b" Nov 24 09:10:08 crc kubenswrapper[4886]: E1124 09:10:08.404743 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b\": container with ID starting with 184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b not found: ID does not exist" containerID="184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.404767 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b"} err="failed to get container status \"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b\": rpc error: code = NotFound desc = could not find container \"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b\": container with ID starting with 184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b not found: ID does not exist" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.404785 4886 scope.go:117] "RemoveContainer" containerID="2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.405737 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7"} err="failed to get container status \"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7\": rpc error: code = NotFound desc = could not find container \"2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7\": container with ID starting with 2b5d1131759d7673124acdd21e225d7efaf2aa3c9cd5bd70d6e4c2c9eafd98f7 not found: ID does not exist" Nov 24 09:10:08 crc 
kubenswrapper[4886]: I1124 09:10:08.405783 4886 scope.go:117] "RemoveContainer" containerID="184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.406219 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b"} err="failed to get container status \"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b\": rpc error: code = NotFound desc = could not find container \"184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b\": container with ID starting with 184a82ad126eab290808e3a9a7a18330e2d1a9237b4487a82652da677885590b not found: ID does not exist" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.538320 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce44156-b3a8-4520-aab3-2c829c7d26cb-logs\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.538851 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.538908 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.539137 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-config-data\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.539343 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5dxg\" (UniqueName: \"kubernetes.io/projected/2ce44156-b3a8-4520-aab3-2c829c7d26cb-kube-api-access-m5dxg\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.641740 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-config-data\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.641933 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5dxg\" (UniqueName: \"kubernetes.io/projected/2ce44156-b3a8-4520-aab3-2c829c7d26cb-kube-api-access-m5dxg\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.642001 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce44156-b3a8-4520-aab3-2c829c7d26cb-logs\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.642038 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.642077 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.644263 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce44156-b3a8-4520-aab3-2c829c7d26cb-logs\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.648803 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-config-data\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.649111 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.649121 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc 
kubenswrapper[4886]: I1124 09:10:08.663812 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5dxg\" (UniqueName: \"kubernetes.io/projected/2ce44156-b3a8-4520-aab3-2c829c7d26cb-kube-api-access-m5dxg\") pod \"nova-metadata-0\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.759260 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.772180 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.864338 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3929bf00-c514-4be5-a088-5bd22e946532" path="/var/lib/kubelet/pods/3929bf00-c514-4be5-a088-5bd22e946532/volumes" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.866477 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ccf13d-537d-485f-ab6d-448bc8171089" path="/var/lib/kubelet/pods/89ccf13d-537d-485f-ab6d-448bc8171089/volumes" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.955442 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-config-data\") pod \"894a9de8-fef6-45c0-9464-fca3f25587e9\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.955604 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srwnv\" (UniqueName: \"kubernetes.io/projected/894a9de8-fef6-45c0-9464-fca3f25587e9-kube-api-access-srwnv\") pod \"894a9de8-fef6-45c0-9464-fca3f25587e9\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.955635 4886 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-scripts\") pod \"894a9de8-fef6-45c0-9464-fca3f25587e9\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.956474 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-combined-ca-bundle\") pod \"894a9de8-fef6-45c0-9464-fca3f25587e9\" (UID: \"894a9de8-fef6-45c0-9464-fca3f25587e9\") " Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.965480 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.984294 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-scripts" (OuterVolumeSpecName: "scripts") pod "894a9de8-fef6-45c0-9464-fca3f25587e9" (UID: "894a9de8-fef6-45c0-9464-fca3f25587e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:08 crc kubenswrapper[4886]: I1124 09:10:08.984341 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/894a9de8-fef6-45c0-9464-fca3f25587e9-kube-api-access-srwnv" (OuterVolumeSpecName: "kube-api-access-srwnv") pod "894a9de8-fef6-45c0-9464-fca3f25587e9" (UID: "894a9de8-fef6-45c0-9464-fca3f25587e9"). InnerVolumeSpecName "kube-api-access-srwnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:08.998592 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "894a9de8-fef6-45c0-9464-fca3f25587e9" (UID: "894a9de8-fef6-45c0-9464-fca3f25587e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:08.999209 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-config-data" (OuterVolumeSpecName: "config-data") pod "894a9de8-fef6-45c0-9464-fca3f25587e9" (UID: "894a9de8-fef6-45c0-9464-fca3f25587e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.058912 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qqkd\" (UniqueName: \"kubernetes.io/projected/7491be7c-1c75-4527-8f77-290e42796216-kube-api-access-4qqkd\") pod \"7491be7c-1c75-4527-8f77-290e42796216\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.058976 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7491be7c-1c75-4527-8f77-290e42796216-logs\") pod \"7491be7c-1c75-4527-8f77-290e42796216\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.059176 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-combined-ca-bundle\") pod \"7491be7c-1c75-4527-8f77-290e42796216\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " Nov 24 
09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.059223 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-config-data\") pod \"7491be7c-1c75-4527-8f77-290e42796216\" (UID: \"7491be7c-1c75-4527-8f77-290e42796216\") " Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.059776 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.059791 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.059802 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srwnv\" (UniqueName: \"kubernetes.io/projected/894a9de8-fef6-45c0-9464-fca3f25587e9-kube-api-access-srwnv\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.059815 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/894a9de8-fef6-45c0-9464-fca3f25587e9-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.060353 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7491be7c-1c75-4527-8f77-290e42796216-logs" (OuterVolumeSpecName: "logs") pod "7491be7c-1c75-4527-8f77-290e42796216" (UID: "7491be7c-1c75-4527-8f77-290e42796216"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.066314 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7491be7c-1c75-4527-8f77-290e42796216-kube-api-access-4qqkd" (OuterVolumeSpecName: "kube-api-access-4qqkd") pod "7491be7c-1c75-4527-8f77-290e42796216" (UID: "7491be7c-1c75-4527-8f77-290e42796216"). InnerVolumeSpecName "kube-api-access-4qqkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.089084 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-config-data" (OuterVolumeSpecName: "config-data") pod "7491be7c-1c75-4527-8f77-290e42796216" (UID: "7491be7c-1c75-4527-8f77-290e42796216"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.091688 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7491be7c-1c75-4527-8f77-290e42796216" (UID: "7491be7c-1c75-4527-8f77-290e42796216"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.161712 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.162360 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7491be7c-1c75-4527-8f77-290e42796216-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.162511 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qqkd\" (UniqueName: \"kubernetes.io/projected/7491be7c-1c75-4527-8f77-290e42796216-kube-api-access-4qqkd\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.162579 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7491be7c-1c75-4527-8f77-290e42796216-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.330496 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c59cj" event={"ID":"894a9de8-fef6-45c0-9464-fca3f25587e9","Type":"ContainerDied","Data":"8f8c4be71daf83215ca543c60f929bda256c00d7af6ef87bf91fd07af9179df0"} Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.330566 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f8c4be71daf83215ca543c60f929bda256c00d7af6ef87bf91fd07af9179df0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.330591 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.331427 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c59cj" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.352840 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0402c99c-2124-499a-8682-bc7cab563f47","Type":"ContainerStarted","Data":"9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92"} Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.352898 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0402c99c-2124-499a-8682-bc7cab563f47","Type":"ContainerStarted","Data":"2f172c7edd110cd54853b066061695f0362812ca17546900ab64b895b0bfcff0"} Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.365103 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 09:10:09 crc kubenswrapper[4886]: E1124 09:10:09.365669 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894a9de8-fef6-45c0-9464-fca3f25587e9" containerName="nova-cell1-conductor-db-sync" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.365693 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="894a9de8-fef6-45c0-9464-fca3f25587e9" containerName="nova-cell1-conductor-db-sync" Nov 24 09:10:09 crc kubenswrapper[4886]: E1124 09:10:09.365723 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-log" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.365732 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-log" Nov 24 09:10:09 crc kubenswrapper[4886]: E1124 09:10:09.365744 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-api" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.365753 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-api" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.365976 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-log" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.365996 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="7491be7c-1c75-4527-8f77-290e42796216" containerName="nova-api-api" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.366005 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="894a9de8-fef6-45c0-9464-fca3f25587e9" containerName="nova-cell1-conductor-db-sync" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.372741 4886 generic.go:334] "Generic (PLEG): container finished" podID="7491be7c-1c75-4527-8f77-290e42796216" containerID="f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8" exitCode=0 Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.372899 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.374684 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7491be7c-1c75-4527-8f77-290e42796216","Type":"ContainerDied","Data":"f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8"} Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.374756 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7491be7c-1c75-4527-8f77-290e42796216","Type":"ContainerDied","Data":"ce4f89d1baada1da78a6cb6c1ddc6e2de415996c3251dc09cc7c2cf66f541093"} Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.374786 4886 scope.go:117] "RemoveContainer" containerID="f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.375069 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.399717 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.430369 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.431019 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.430987662 podStartE2EDuration="2.430987662s" podCreationTimestamp="2025-11-24 09:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:09.38937042 +0000 UTC m=+1265.276108555" watchObservedRunningTime="2025-11-24 09:10:09.430987662 +0000 UTC m=+1265.317725797" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.469218 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec788e35-0154-4b74-86b4-5a21037b3e4a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.469295 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzb5f\" (UniqueName: \"kubernetes.io/projected/ec788e35-0154-4b74-86b4-5a21037b3e4a-kube-api-access-wzb5f\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.469512 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec788e35-0154-4b74-86b4-5a21037b3e4a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.501976 4886 scope.go:117] "RemoveContainer" containerID="099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.537574 4886 scope.go:117] "RemoveContainer" containerID="f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8" Nov 24 09:10:09 crc kubenswrapper[4886]: E1124 09:10:09.538121 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8\": container with ID starting with f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8 not found: ID does not exist" containerID="f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.538176 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8"} err="failed to get container status \"f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8\": rpc error: code = NotFound desc = could not find container \"f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8\": container with ID starting with f938f5910651b9c86c5bf75b01ed7a64a9319e6df1686915eb77fb252f8323a8 not found: ID does not exist" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.538203 4886 scope.go:117] "RemoveContainer" containerID="099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27" Nov 24 09:10:09 crc kubenswrapper[4886]: E1124 09:10:09.538616 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27\": container with ID starting with 099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27 not found: ID does not exist" containerID="099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.538706 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27"} err="failed to get container status \"099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27\": rpc error: code = NotFound desc = could not find container \"099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27\": container with ID starting with 099beaee49b7b7d1aa6f753a3713dd9c58971a18d67a467336ae5022c0208c27 not found: ID does not exist" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.548898 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.572870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec788e35-0154-4b74-86b4-5a21037b3e4a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.572940 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzb5f\" (UniqueName: \"kubernetes.io/projected/ec788e35-0154-4b74-86b4-5a21037b3e4a-kube-api-access-wzb5f\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.572969 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec788e35-0154-4b74-86b4-5a21037b3e4a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.579247 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.580424 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec788e35-0154-4b74-86b4-5a21037b3e4a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.584098 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec788e35-0154-4b74-86b4-5a21037b3e4a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.591467 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.597613 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.600571 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.602911 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.606890 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzb5f\" (UniqueName: \"kubernetes.io/projected/ec788e35-0154-4b74-86b4-5a21037b3e4a-kube-api-access-wzb5f\") pod \"nova-cell1-conductor-0\" (UID: \"ec788e35-0154-4b74-86b4-5a21037b3e4a\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.675229 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3861616d-d10e-4420-8770-397a0f78e143-logs\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.675787 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-config-data\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.676438 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkklr\" (UniqueName: \"kubernetes.io/projected/3861616d-d10e-4420-8770-397a0f78e143-kube-api-access-pkklr\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.676626 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.779109 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3861616d-d10e-4420-8770-397a0f78e143-logs\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.779588 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-config-data\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.779527 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3861616d-d10e-4420-8770-397a0f78e143-logs\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.779801 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkklr\" (UniqueName: \"kubernetes.io/projected/3861616d-d10e-4420-8770-397a0f78e143-kube-api-access-pkklr\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.780238 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 
09:10:09.784751 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.786577 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-config-data\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.797630 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkklr\" (UniqueName: \"kubernetes.io/projected/3861616d-d10e-4420-8770-397a0f78e143-kube-api-access-pkklr\") pod \"nova-api-0\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") " pod="openstack/nova-api-0" Nov 24 09:10:09 crc kubenswrapper[4886]: I1124 09:10:09.832400 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 09:10:10 crc kubenswrapper[4886]: I1124 09:10:10.062383 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:10:10 crc kubenswrapper[4886]: I1124 09:10:10.337600 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 09:10:10 crc kubenswrapper[4886]: I1124 09:10:10.422263 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ec788e35-0154-4b74-86b4-5a21037b3e4a","Type":"ContainerStarted","Data":"a6e810744fd437d1d1ad5859533b26aaf5252f9124ebcf66db039632b9a22544"} Nov 24 09:10:10 crc kubenswrapper[4886]: I1124 09:10:10.423938 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ce44156-b3a8-4520-aab3-2c829c7d26cb","Type":"ContainerStarted","Data":"b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1"} Nov 24 09:10:10 crc kubenswrapper[4886]: I1124 09:10:10.423997 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ce44156-b3a8-4520-aab3-2c829c7d26cb","Type":"ContainerStarted","Data":"e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa"} Nov 24 09:10:10 crc kubenswrapper[4886]: I1124 09:10:10.424015 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ce44156-b3a8-4520-aab3-2c829c7d26cb","Type":"ContainerStarted","Data":"4309c12fefb1a592f0a9b96345ae3ecca5e5af5c2e5629d7a3696f59fca9a272"} Nov 24 09:10:10 crc kubenswrapper[4886]: I1124 09:10:10.455689 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.455672485 podStartE2EDuration="2.455672485s" podCreationTimestamp="2025-11-24 09:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:10.439482645 +0000 UTC m=+1266.326220790" watchObservedRunningTime="2025-11-24 09:10:10.455672485 +0000 UTC m=+1266.342410610" Nov 24 09:10:10 
crc kubenswrapper[4886]: I1124 09:10:10.514163 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:10:10 crc kubenswrapper[4886]: W1124 09:10:10.530484 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3861616d_d10e_4420_8770_397a0f78e143.slice/crio-4487dfb331ade92398b0c82c2a0bc69e7cbbac93b06d109955bc17055d850750 WatchSource:0}: Error finding container 4487dfb331ade92398b0c82c2a0bc69e7cbbac93b06d109955bc17055d850750: Status 404 returned error can't find the container with id 4487dfb331ade92398b0c82c2a0bc69e7cbbac93b06d109955bc17055d850750
Nov 24 09:10:10 crc kubenswrapper[4886]: I1124 09:10:10.871196 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7491be7c-1c75-4527-8f77-290e42796216" path="/var/lib/kubelet/pods/7491be7c-1c75-4527-8f77-290e42796216/volumes"
Nov 24 09:10:11 crc kubenswrapper[4886]: I1124 09:10:11.437379 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ec788e35-0154-4b74-86b4-5a21037b3e4a","Type":"ContainerStarted","Data":"cdfd5145923891bb78fb2068146990358bc966bd58030a2b633b557ce6d8459e"}
Nov 24 09:10:11 crc kubenswrapper[4886]: I1124 09:10:11.437898 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Nov 24 09:10:11 crc kubenswrapper[4886]: I1124 09:10:11.443285 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3861616d-d10e-4420-8770-397a0f78e143","Type":"ContainerStarted","Data":"3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319"}
Nov 24 09:10:11 crc kubenswrapper[4886]: I1124 09:10:11.443335 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3861616d-d10e-4420-8770-397a0f78e143","Type":"ContainerStarted","Data":"acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f"}
Nov 24 09:10:11 crc kubenswrapper[4886]: I1124 09:10:11.443345 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3861616d-d10e-4420-8770-397a0f78e143","Type":"ContainerStarted","Data":"4487dfb331ade92398b0c82c2a0bc69e7cbbac93b06d109955bc17055d850750"}
Nov 24 09:10:11 crc kubenswrapper[4886]: I1124 09:10:11.464385 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.464358253 podStartE2EDuration="2.464358253s" podCreationTimestamp="2025-11-24 09:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:11.455992916 +0000 UTC m=+1267.342731051" watchObservedRunningTime="2025-11-24 09:10:11.464358253 +0000 UTC m=+1267.351096388"
Nov 24 09:10:11 crc kubenswrapper[4886]: I1124 09:10:11.481868 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.48183929 podStartE2EDuration="2.48183929s" podCreationTimestamp="2025-11-24 09:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:11.48183112 +0000 UTC m=+1267.368569265" watchObservedRunningTime="2025-11-24 09:10:11.48183929 +0000 UTC m=+1267.368577435"
Nov 24 09:10:12 crc kubenswrapper[4886]: I1124 09:10:12.861329 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 24 09:10:13 crc kubenswrapper[4886]: I1124 09:10:13.759948 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 09:10:13 crc kubenswrapper[4886]: I1124 09:10:13.760423 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 09:10:15 crc kubenswrapper[4886]: E1124 09:10:15.842061 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7266b1c_bb21_4f54_994c_52ab5db8d4eb.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 09:10:17 crc kubenswrapper[4886]: I1124 09:10:17.854267 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 24 09:10:17 crc kubenswrapper[4886]: I1124 09:10:17.887073 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 24 09:10:18 crc kubenswrapper[4886]: I1124 09:10:18.546792 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 24 09:10:18 crc kubenswrapper[4886]: I1124 09:10:18.761009 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 09:10:18 crc kubenswrapper[4886]: I1124 09:10:18.761056 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 09:10:19 crc kubenswrapper[4886]: I1124 09:10:19.771537 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:10:19 crc kubenswrapper[4886]: I1124 09:10:19.772584 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:10:19 crc kubenswrapper[4886]: I1124 09:10:19.870033 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 24 09:10:20 crc kubenswrapper[4886]: I1124 09:10:20.063752 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 09:10:20 crc kubenswrapper[4886]: I1124 09:10:20.064365 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 09:10:21 crc kubenswrapper[4886]: I1124 09:10:21.145566 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:10:21 crc kubenswrapper[4886]: I1124 09:10:21.145588 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:10:23 crc kubenswrapper[4886]: I1124 09:10:23.258527 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 24 09:10:26 crc kubenswrapper[4886]: E1124 09:10:26.109384 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7266b1c_bb21_4f54_994c_52ab5db8d4eb.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 09:10:26 crc kubenswrapper[4886]: I1124 09:10:26.937872 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 09:10:26 crc kubenswrapper[4886]: I1124 09:10:26.938144 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c4dbd151-5916-42f8-9555-adf76d2480bf" containerName="kube-state-metrics" containerID="cri-o://2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4" gracePeriod=30
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.493167 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.578061 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwg4\" (UniqueName: \"kubernetes.io/projected/c4dbd151-5916-42f8-9555-adf76d2480bf-kube-api-access-ptwg4\") pod \"c4dbd151-5916-42f8-9555-adf76d2480bf\" (UID: \"c4dbd151-5916-42f8-9555-adf76d2480bf\") "
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.591099 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4dbd151-5916-42f8-9555-adf76d2480bf-kube-api-access-ptwg4" (OuterVolumeSpecName: "kube-api-access-ptwg4") pod "c4dbd151-5916-42f8-9555-adf76d2480bf" (UID: "c4dbd151-5916-42f8-9555-adf76d2480bf"). InnerVolumeSpecName "kube-api-access-ptwg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.631511 4886 generic.go:334] "Generic (PLEG): container finished" podID="c4dbd151-5916-42f8-9555-adf76d2480bf" containerID="2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4" exitCode=2
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.631568 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4dbd151-5916-42f8-9555-adf76d2480bf","Type":"ContainerDied","Data":"2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4"}
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.631604 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4dbd151-5916-42f8-9555-adf76d2480bf","Type":"ContainerDied","Data":"bfe200e2ebb6377374ae537725aff3e0b9cd28c95eae2dcf3ee4cd2ebe7c960d"}
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.631641 4886 scope.go:117] "RemoveContainer" containerID="2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.631784 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.673120 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.685122 4886 scope.go:117] "RemoveContainer" containerID="2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4"
Nov 24 09:10:27 crc kubenswrapper[4886]: E1124 09:10:27.685874 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4\": container with ID starting with 2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4 not found: ID does not exist" containerID="2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.685941 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4"} err="failed to get container status \"2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4\": rpc error: code = NotFound desc = could not find container \"2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4\": container with ID starting with 2d72f50960632eae7569ce326a40fd66d91d0ce713c3955b5a409d075fe344e4 not found: ID does not exist"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.687040 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwg4\" (UniqueName: \"kubernetes.io/projected/c4dbd151-5916-42f8-9555-adf76d2480bf-kube-api-access-ptwg4\") on node \"crc\" DevicePath \"\""
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.689641 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.701748 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 09:10:27 crc kubenswrapper[4886]: E1124 09:10:27.702289 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4dbd151-5916-42f8-9555-adf76d2480bf" containerName="kube-state-metrics"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.702307 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4dbd151-5916-42f8-9555-adf76d2480bf" containerName="kube-state-metrics"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.702498 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4dbd151-5916-42f8-9555-adf76d2480bf" containerName="kube-state-metrics"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.703253 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.705388 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.705457 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.727282 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.788814 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.788880 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.788921 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.789042 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szl6s\" (UniqueName: \"kubernetes.io/projected/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-api-access-szl6s\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.891236 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szl6s\" (UniqueName: \"kubernetes.io/projected/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-api-access-szl6s\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.891294 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.891325 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.891367 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.903292 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.903701 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.903705 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f8779a-9800-4658-aa0a-8603669d7fbe-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:27 crc kubenswrapper[4886]: I1124 09:10:27.915924 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szl6s\" (UniqueName: \"kubernetes.io/projected/39f8779a-9800-4658-aa0a-8603669d7fbe-kube-api-access-szl6s\") pod \"kube-state-metrics-0\" (UID: \"39f8779a-9800-4658-aa0a-8603669d7fbe\") " pod="openstack/kube-state-metrics-0"
Nov 24 09:10:28 crc kubenswrapper[4886]: I1124 09:10:28.027915 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 24 09:10:28 crc kubenswrapper[4886]: I1124 09:10:28.632163 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 24 09:10:28 crc kubenswrapper[4886]: I1124 09:10:28.647188 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"39f8779a-9800-4658-aa0a-8603669d7fbe","Type":"ContainerStarted","Data":"07114d3ec4dff1acc98753502a5531b967f9ac7e1ea75aaff265c1a413eafd88"}
Nov 24 09:10:28 crc kubenswrapper[4886]: I1124 09:10:28.767086 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 09:10:28 crc kubenswrapper[4886]: I1124 09:10:28.772845 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 09:10:28 crc kubenswrapper[4886]: I1124 09:10:28.774726 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 09:10:28 crc kubenswrapper[4886]: I1124 09:10:28.862011 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4dbd151-5916-42f8-9555-adf76d2480bf" path="/var/lib/kubelet/pods/c4dbd151-5916-42f8-9555-adf76d2480bf/volumes"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.161759 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.162623 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="ceilometer-central-agent" containerID="cri-o://53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1" gracePeriod=30
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.162778 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="proxy-httpd" containerID="cri-o://c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2" gracePeriod=30
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.162835 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="sg-core" containerID="cri-o://918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b" gracePeriod=30
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.162871 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="ceilometer-notification-agent" containerID="cri-o://8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331" gracePeriod=30
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.414695 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.552799 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-combined-ca-bundle\") pod \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") "
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.552860 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtdcw\" (UniqueName: \"kubernetes.io/projected/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-kube-api-access-mtdcw\") pod \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") "
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.553015 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-config-data\") pod \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\" (UID: \"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc\") "
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.562495 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-kube-api-access-mtdcw" (OuterVolumeSpecName: "kube-api-access-mtdcw") pod "6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" (UID: "6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc"). InnerVolumeSpecName "kube-api-access-mtdcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.588139 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-config-data" (OuterVolumeSpecName: "config-data") pod "6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" (UID: "6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.592884 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" (UID: "6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.655559 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.655598 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtdcw\" (UniqueName: \"kubernetes.io/projected/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-kube-api-access-mtdcw\") on node \"crc\" DevicePath \"\""
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.655609 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.662237 4886 generic.go:334] "Generic (PLEG): container finished" podID="6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" containerID="de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae" exitCode=137
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.662339 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.662359 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc","Type":"ContainerDied","Data":"de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae"}
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.662449 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc","Type":"ContainerDied","Data":"b6db09edd0afdbcf6c000602b5809f8f6ec6dd5a966b9af9cc8e5835cd1cee6e"}
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.662476 4886 scope.go:117] "RemoveContainer" containerID="de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.665031 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"39f8779a-9800-4658-aa0a-8603669d7fbe","Type":"ContainerStarted","Data":"46522d596c1e7ba98a7b5af3d7ba95b6a446009c1d81deaf9affe84821053f24"}
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.665211 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.667814 4886 generic.go:334] "Generic (PLEG): container finished" podID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerID="c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2" exitCode=0
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.667845 4886 generic.go:334] "Generic (PLEG): container finished" podID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerID="918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b" exitCode=2
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.667856 4886 generic.go:334] "Generic (PLEG): container finished" podID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerID="53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1" exitCode=0
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.667963 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerDied","Data":"c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2"}
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.668050 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerDied","Data":"918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b"}
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.668071 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerDied","Data":"53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1"}
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.678450 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.692533 4886 scope.go:117] "RemoveContainer" containerID="de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae"
Nov 24 09:10:29 crc kubenswrapper[4886]: E1124 09:10:29.693141 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae\": container with ID starting with de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae not found: ID does not exist" containerID="de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.693196 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae"} err="failed to get container status \"de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae\": rpc error: code = NotFound desc = could not find container \"de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae\": container with ID starting with de223e8454c25a1bf218b4efad0685df503bc192f92b8722f135dddc734c2aae not found: ID does not exist"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.697667 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.264553632 podStartE2EDuration="2.697656887s" podCreationTimestamp="2025-11-24 09:10:27 +0000 UTC" firstStartedPulling="2025-11-24 09:10:28.632543395 +0000 UTC m=+1284.519281530" lastFinishedPulling="2025-11-24 09:10:29.06564665 +0000 UTC m=+1284.952384785" observedRunningTime="2025-11-24 09:10:29.694812706 +0000 UTC m=+1285.581550841" watchObservedRunningTime="2025-11-24 09:10:29.697656887 +0000 UTC m=+1285.584395022"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.815036 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.824605 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.848939 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 09:10:29 crc kubenswrapper[4886]: E1124 09:10:29.849601 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" containerName="nova-cell1-novncproxy-novncproxy"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.849617 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" containerName="nova-cell1-novncproxy-novncproxy"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.849827 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" containerName="nova-cell1-novncproxy-novncproxy"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.851040 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.853867 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.854668 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.854850 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.857839 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.962520 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.962622 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.962647 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.962683 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:29 crc kubenswrapper[4886]: I1124 09:10:29.962746 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wrt\" (UniqueName: \"kubernetes.io/projected/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-kube-api-access-b8wrt\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.064554 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.064658 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.064681 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.064716 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.064780 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8wrt\" (UniqueName: \"kubernetes.io/projected/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-kube-api-access-b8wrt\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.071101 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.071882 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.072114 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.072312 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.073179 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.074074 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.081507 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.083045 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.084589 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8wrt\" (UniqueName: \"kubernetes.io/projected/ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4-kube-api-access-b8wrt\") pod \"nova-cell1-novncproxy-0\" (UID: \"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.183977 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.650572 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.684824 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4","Type":"ContainerStarted","Data":"5c556d9fb8c146a7446b46b48ae3b162d5b3dc4eed8443c95119a0feb77767fc"} Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.693088 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.697448 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.866139 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc" path="/var/lib/kubelet/pods/6879ea11-ba6a-4d7e-8a6a-762f29e1b4cc/volumes" Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.945810 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-b9stw"] Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.948460 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:30 crc kubenswrapper[4886]: I1124 09:10:30.962027 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-b9stw"] Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.095330 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.095497 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.095530 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.095575 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwqp\" (UniqueName: \"kubernetes.io/projected/e564c004-2962-49a2-84d2-bf67161bcea5-kube-api-access-xfwqp\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.095619 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.095691 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-config\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.198249 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.198356 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-config\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.198409 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.198796 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.198962 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.199204 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwqp\" (UniqueName: \"kubernetes.io/projected/e564c004-2962-49a2-84d2-bf67161bcea5-kube-api-access-xfwqp\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.199649 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.199793 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-config\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.199901 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.200104 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.200573 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.223190 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfwqp\" (UniqueName: \"kubernetes.io/projected/e564c004-2962-49a2-84d2-bf67161bcea5-kube-api-access-xfwqp\") pod \"dnsmasq-dns-cd5cbd7b9-b9stw\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.287356 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.699014 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4","Type":"ContainerStarted","Data":"aa6da61e24be107832d45229f161e885cd79fe9d1e91dc731b76e82fea319148"} Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.828733 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.828704292 podStartE2EDuration="2.828704292s" podCreationTimestamp="2025-11-24 09:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:31.728566067 +0000 UTC m=+1287.615304212" watchObservedRunningTime="2025-11-24 09:10:31.828704292 +0000 UTC m=+1287.715442427" Nov 24 09:10:31 crc kubenswrapper[4886]: I1124 09:10:31.832774 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-b9stw"] Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.607571 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.715953 4886 generic.go:334] "Generic (PLEG): container finished" podID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerID="8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331" exitCode=0 Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.716087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerDied","Data":"8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331"} Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.716128 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f87697b1-d576-4e2b-ba20-04d9453e0656","Type":"ContainerDied","Data":"8f28f4f5f8c164ec7a9ca7ff68597f305a05de47eba718d687a3c90632fc750c"} Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.716167 4886 scope.go:117] "RemoveContainer" containerID="c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.716353 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.722680 4886 generic.go:334] "Generic (PLEG): container finished" podID="e564c004-2962-49a2-84d2-bf67161bcea5" containerID="08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef" exitCode=0 Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.722865 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" event={"ID":"e564c004-2962-49a2-84d2-bf67161bcea5","Type":"ContainerDied","Data":"08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef"} Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.722942 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" event={"ID":"e564c004-2962-49a2-84d2-bf67161bcea5","Type":"ContainerStarted","Data":"0336fb4795107360d4c9afe4c40716bdd22683e9cc6397b664f8c3bf2c69e62b"} Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.742246 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-log-httpd\") pod \"f87697b1-d576-4e2b-ba20-04d9453e0656\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.742301 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-run-httpd\") pod \"f87697b1-d576-4e2b-ba20-04d9453e0656\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.742358 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-config-data\") pod \"f87697b1-d576-4e2b-ba20-04d9453e0656\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " Nov 24 09:10:32 crc 
kubenswrapper[4886]: I1124 09:10:32.742414 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-scripts\") pod \"f87697b1-d576-4e2b-ba20-04d9453e0656\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.743710 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j5cd\" (UniqueName: \"kubernetes.io/projected/f87697b1-d576-4e2b-ba20-04d9453e0656-kube-api-access-9j5cd\") pod \"f87697b1-d576-4e2b-ba20-04d9453e0656\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.743841 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-combined-ca-bundle\") pod \"f87697b1-d576-4e2b-ba20-04d9453e0656\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.743945 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-sg-core-conf-yaml\") pod \"f87697b1-d576-4e2b-ba20-04d9453e0656\" (UID: \"f87697b1-d576-4e2b-ba20-04d9453e0656\") " Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.746575 4886 scope.go:117] "RemoveContainer" containerID="918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.747125 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f87697b1-d576-4e2b-ba20-04d9453e0656" (UID: "f87697b1-d576-4e2b-ba20-04d9453e0656"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.748019 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f87697b1-d576-4e2b-ba20-04d9453e0656" (UID: "f87697b1-d576-4e2b-ba20-04d9453e0656"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.762117 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-scripts" (OuterVolumeSpecName: "scripts") pod "f87697b1-d576-4e2b-ba20-04d9453e0656" (UID: "f87697b1-d576-4e2b-ba20-04d9453e0656"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.799591 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f87697b1-d576-4e2b-ba20-04d9453e0656-kube-api-access-9j5cd" (OuterVolumeSpecName: "kube-api-access-9j5cd") pod "f87697b1-d576-4e2b-ba20-04d9453e0656" (UID: "f87697b1-d576-4e2b-ba20-04d9453e0656"). InnerVolumeSpecName "kube-api-access-9j5cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.817078 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f87697b1-d576-4e2b-ba20-04d9453e0656" (UID: "f87697b1-d576-4e2b-ba20-04d9453e0656"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.831890 4886 scope.go:117] "RemoveContainer" containerID="8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.861497 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j5cd\" (UniqueName: \"kubernetes.io/projected/f87697b1-d576-4e2b-ba20-04d9453e0656-kube-api-access-9j5cd\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.861551 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.861564 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.861581 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f87697b1-d576-4e2b-ba20-04d9453e0656-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.861595 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.890881 4886 scope.go:117] "RemoveContainer" containerID="53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.911433 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"f87697b1-d576-4e2b-ba20-04d9453e0656" (UID: "f87697b1-d576-4e2b-ba20-04d9453e0656"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.953881 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-config-data" (OuterVolumeSpecName: "config-data") pod "f87697b1-d576-4e2b-ba20-04d9453e0656" (UID: "f87697b1-d576-4e2b-ba20-04d9453e0656"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.968801 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.968900 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87697b1-d576-4e2b-ba20-04d9453e0656-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.972341 4886 scope.go:117] "RemoveContainer" containerID="c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2" Nov 24 09:10:32 crc kubenswrapper[4886]: E1124 09:10:32.973061 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2\": container with ID starting with c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2 not found: ID does not exist" containerID="c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.973105 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2"} err="failed to get container status \"c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2\": rpc error: code = NotFound desc = could not find container \"c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2\": container with ID starting with c9e7abbfd015b4860d2a4de8afda51802c6af321e9e1c646d00c2d3039d64ca2 not found: ID does not exist" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.973140 4886 scope.go:117] "RemoveContainer" containerID="918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b" Nov 24 09:10:32 crc kubenswrapper[4886]: E1124 09:10:32.977758 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b\": container with ID starting with 918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b not found: ID does not exist" containerID="918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.977815 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b"} err="failed to get container status \"918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b\": rpc error: code = NotFound desc = could not find container \"918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b\": container with ID starting with 918fa517447526287717f7285cc35378c1860604c8de22a18d8e6d2774c32c4b not found: ID does not exist" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.977850 4886 scope.go:117] "RemoveContainer" containerID="8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331" Nov 24 09:10:32 crc kubenswrapper[4886]: E1124 09:10:32.979940 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331\": container with ID starting with 8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331 not found: ID does not exist" containerID="8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.979979 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331"} err="failed to get container status \"8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331\": rpc error: code = NotFound desc = could not find container \"8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331\": container with ID starting with 8bd2070a7c217a02442155c45d61562ed1a5fb43a1d566d083527f221681e331 not found: ID does not exist" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.979998 4886 scope.go:117] "RemoveContainer" containerID="53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1" Nov 24 09:10:32 crc kubenswrapper[4886]: E1124 09:10:32.983047 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1\": container with ID starting with 53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1 not found: ID does not exist" containerID="53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1" Nov 24 09:10:32 crc kubenswrapper[4886]: I1124 09:10:32.983095 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1"} err="failed to get container status \"53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1\": rpc error: code = NotFound desc = could not find container 
\"53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1\": container with ID starting with 53e7cf41d6c15d164522bad27bfaa0ff7fe8414ba567a42fbb48b10ab7b149f1 not found: ID does not exist" Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.057434 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.070535 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.096478 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:10:33 crc kubenswrapper[4886]: E1124 09:10:33.097166 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="proxy-httpd" Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.097188 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="proxy-httpd" Nov 24 09:10:33 crc kubenswrapper[4886]: E1124 09:10:33.097227 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="sg-core" Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.097234 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="sg-core" Nov 24 09:10:33 crc kubenswrapper[4886]: E1124 09:10:33.097263 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="ceilometer-central-agent" Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.097274 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="ceilometer-central-agent" Nov 24 09:10:33 crc kubenswrapper[4886]: E1124 09:10:33.097308 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" 
containerName="ceilometer-notification-agent"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.097315 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="ceilometer-notification-agent"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.097561 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="ceilometer-notification-agent"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.097616 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="proxy-httpd"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.097630 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="sg-core"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.097644 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" containerName="ceilometer-central-agent"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.099686 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.124762 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.125005 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.125055 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.132580 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.174804 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.175325 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-config-data\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.175364 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.175626 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-log-httpd\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.175703 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngf9\" (UniqueName: \"kubernetes.io/projected/442a4f83-b738-44b6-8018-d89ba1f03cce-kube-api-access-tngf9\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.175879 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-run-httpd\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.176064 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-scripts\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.176189 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.278216 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-scripts\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.278292 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.278410 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.278450 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-config-data\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.278476 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.279418 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-log-httpd\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.279460 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tngf9\" (UniqueName: \"kubernetes.io/projected/442a4f83-b738-44b6-8018-d89ba1f03cce-kube-api-access-tngf9\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.279507 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-run-httpd\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.280008 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-run-httpd\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.280018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-log-httpd\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.283370 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.283661 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-scripts\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.284013 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.286655 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-config-data\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.290650 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.301349 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tngf9\" (UniqueName: \"kubernetes.io/projected/442a4f83-b738-44b6-8018-d89ba1f03cce-kube-api-access-tngf9\") pod \"ceilometer-0\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.459909 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.752876 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.754708 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" event={"ID":"e564c004-2962-49a2-84d2-bf67161bcea5","Type":"ContainerStarted","Data":"facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0"}
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.754832 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.758368 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-log" containerID="cri-o://acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f" gracePeriod=30
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.758535 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-api" containerID="cri-o://3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319" gracePeriod=30
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.776103 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.793218 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" podStartSLOduration=3.793192517 podStartE2EDuration="3.793192517s" podCreationTimestamp="2025-11-24 09:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:33.780022433 +0000 UTC m=+1289.666760568" watchObservedRunningTime="2025-11-24 09:10:33.793192517 +0000 UTC m=+1289.679930652"
Nov 24 09:10:33 crc kubenswrapper[4886]: I1124 09:10:33.930631 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:10:33 crc kubenswrapper[4886]: W1124 09:10:33.932644 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod442a4f83_b738_44b6_8018_d89ba1f03cce.slice/crio-36bcaf0bceb6f01c9f9ec92c2bba889b4a2181cdf8e5ac0790a8dd43a8b3ca3c WatchSource:0}: Error finding container 36bcaf0bceb6f01c9f9ec92c2bba889b4a2181cdf8e5ac0790a8dd43a8b3ca3c: Status 404 returned error can't find the container with id 36bcaf0bceb6f01c9f9ec92c2bba889b4a2181cdf8e5ac0790a8dd43a8b3ca3c
Nov 24 09:10:34 crc kubenswrapper[4886]: I1124 09:10:34.769481 4886 generic.go:334] "Generic (PLEG): container finished" podID="3861616d-d10e-4420-8770-397a0f78e143" containerID="acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f" exitCode=143
Nov 24 09:10:34 crc kubenswrapper[4886]: I1124 09:10:34.769570 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3861616d-d10e-4420-8770-397a0f78e143","Type":"ContainerDied","Data":"acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f"}
Nov 24 09:10:34 crc kubenswrapper[4886]: I1124 09:10:34.772760 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerStarted","Data":"6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635"}
Nov 24 09:10:34 crc kubenswrapper[4886]: I1124 09:10:34.772807 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerStarted","Data":"36bcaf0bceb6f01c9f9ec92c2bba889b4a2181cdf8e5ac0790a8dd43a8b3ca3c"}
Nov 24 09:10:34 crc kubenswrapper[4886]: I1124 09:10:34.864431 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f87697b1-d576-4e2b-ba20-04d9453e0656" path="/var/lib/kubelet/pods/f87697b1-d576-4e2b-ba20-04d9453e0656/volumes"
Nov 24 09:10:35 crc kubenswrapper[4886]: I1124 09:10:35.184280 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 09:10:35 crc kubenswrapper[4886]: I1124 09:10:35.790362 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerStarted","Data":"311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196"}
Nov 24 09:10:36 crc kubenswrapper[4886]: E1124 09:10:36.366821 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7266b1c_bb21_4f54_994c_52ab5db8d4eb.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 09:10:36 crc kubenswrapper[4886]: I1124 09:10:36.806082 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerStarted","Data":"ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5"}
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.428974 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.581215 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3861616d-d10e-4420-8770-397a0f78e143-logs\") pod \"3861616d-d10e-4420-8770-397a0f78e143\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") "
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.581429 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-config-data\") pod \"3861616d-d10e-4420-8770-397a0f78e143\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") "
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.581450 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkklr\" (UniqueName: \"kubernetes.io/projected/3861616d-d10e-4420-8770-397a0f78e143-kube-api-access-pkklr\") pod \"3861616d-d10e-4420-8770-397a0f78e143\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") "
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.581516 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-combined-ca-bundle\") pod \"3861616d-d10e-4420-8770-397a0f78e143\" (UID: \"3861616d-d10e-4420-8770-397a0f78e143\") "
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.582182 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3861616d-d10e-4420-8770-397a0f78e143-logs" (OuterVolumeSpecName: "logs") pod "3861616d-d10e-4420-8770-397a0f78e143" (UID: "3861616d-d10e-4420-8770-397a0f78e143"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.588099 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3861616d-d10e-4420-8770-397a0f78e143-kube-api-access-pkklr" (OuterVolumeSpecName: "kube-api-access-pkklr") pod "3861616d-d10e-4420-8770-397a0f78e143" (UID: "3861616d-d10e-4420-8770-397a0f78e143"). InnerVolumeSpecName "kube-api-access-pkklr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.645340 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-config-data" (OuterVolumeSpecName: "config-data") pod "3861616d-d10e-4420-8770-397a0f78e143" (UID: "3861616d-d10e-4420-8770-397a0f78e143"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.679335 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3861616d-d10e-4420-8770-397a0f78e143" (UID: "3861616d-d10e-4420-8770-397a0f78e143"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.683995 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.684047 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3861616d-d10e-4420-8770-397a0f78e143-logs\") on node \"crc\" DevicePath \"\""
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.684067 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3861616d-d10e-4420-8770-397a0f78e143-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.684078 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkklr\" (UniqueName: \"kubernetes.io/projected/3861616d-d10e-4420-8770-397a0f78e143-kube-api-access-pkklr\") on node \"crc\" DevicePath \"\""
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.818627 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerStarted","Data":"82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361"}
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.818732 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="ceilometer-central-agent" containerID="cri-o://6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635" gracePeriod=30
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.818782 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.818813 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="sg-core" containerID="cri-o://ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5" gracePeriod=30
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.818800 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="proxy-httpd" containerID="cri-o://82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361" gracePeriod=30
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.818826 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="ceilometer-notification-agent" containerID="cri-o://311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196" gracePeriod=30
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.822443 4886 generic.go:334] "Generic (PLEG): container finished" podID="3861616d-d10e-4420-8770-397a0f78e143" containerID="3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319" exitCode=0
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.822505 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3861616d-d10e-4420-8770-397a0f78e143","Type":"ContainerDied","Data":"3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319"}
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.822541 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3861616d-d10e-4420-8770-397a0f78e143","Type":"ContainerDied","Data":"4487dfb331ade92398b0c82c2a0bc69e7cbbac93b06d109955bc17055d850750"}
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.822585 4886 scope.go:117] "RemoveContainer" containerID="3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.822800 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.851811 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.540617352 podStartE2EDuration="4.851781616s" podCreationTimestamp="2025-11-24 09:10:33 +0000 UTC" firstStartedPulling="2025-11-24 09:10:33.935313915 +0000 UTC m=+1289.822052050" lastFinishedPulling="2025-11-24 09:10:37.246478179 +0000 UTC m=+1293.133216314" observedRunningTime="2025-11-24 09:10:37.844937631 +0000 UTC m=+1293.731675806" watchObservedRunningTime="2025-11-24 09:10:37.851781616 +0000 UTC m=+1293.738519751"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.879358 4886 scope.go:117] "RemoveContainer" containerID="acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.893286 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.924102 4886 scope.go:117] "RemoveContainer" containerID="3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319"
Nov 24 09:10:37 crc kubenswrapper[4886]: E1124 09:10:37.927627 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319\": container with ID starting with 3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319 not found: ID does not exist" containerID="3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.927672 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319"} err="failed to get container status \"3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319\": rpc error: code = NotFound desc = could not find container \"3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319\": container with ID starting with 3e2d5dd317c3db1365151c8dc7b2306d617a7c3f1530891a448d4717818cc319 not found: ID does not exist"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.927699 4886 scope.go:117] "RemoveContainer" containerID="acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f"
Nov 24 09:10:37 crc kubenswrapper[4886]: E1124 09:10:37.932203 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f\": container with ID starting with acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f not found: ID does not exist" containerID="acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.932271 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f"} err="failed to get container status \"acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f\": rpc error: code = NotFound desc = could not find container \"acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f\": container with ID starting with acb87bfd28f41612d07151291eb5e647cca4c25ae31c44a96323a5203de93d1f not found: ID does not exist"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.950289 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.969828 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:10:37 crc kubenswrapper[4886]: E1124 09:10:37.970457 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-log"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.970481 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-log"
Nov 24 09:10:37 crc kubenswrapper[4886]: E1124 09:10:37.970496 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-api"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.970506 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-api"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.970781 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-api"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.970803 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="3861616d-d10e-4420-8770-397a0f78e143" containerName="nova-api-log"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.972107 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.974467 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.974646 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.975567 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Nov 24 09:10:37 crc kubenswrapper[4886]: I1124 09:10:37.982841 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.041891 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.092286 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.092389 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.092424 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-config-data\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.092468 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-public-tls-certs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.092544 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b43fd3-4034-48fd-ae88-d510286a394a-logs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.092593 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fkzl\" (UniqueName: \"kubernetes.io/projected/55b43fd3-4034-48fd-ae88-d510286a394a-kube-api-access-2fkzl\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.194673 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.194745 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-config-data\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.194783 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-public-tls-certs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.194838 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b43fd3-4034-48fd-ae88-d510286a394a-logs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.194892 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fkzl\" (UniqueName: \"kubernetes.io/projected/55b43fd3-4034-48fd-ae88-d510286a394a-kube-api-access-2fkzl\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.195049 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.196059 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b43fd3-4034-48fd-ae88-d510286a394a-logs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.202873 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.203225 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-config-data\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.212735 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.214703 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-public-tls-certs\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.216314 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fkzl\" (UniqueName: \"kubernetes.io/projected/55b43fd3-4034-48fd-ae88-d510286a394a-kube-api-access-2fkzl\") pod \"nova-api-0\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.402898 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.836486 4886 generic.go:334] "Generic (PLEG): container finished" podID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerID="82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361" exitCode=0
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.837511 4886 generic.go:334] "Generic (PLEG): container finished" podID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerID="ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5" exitCode=2
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.837526 4886 generic.go:334] "Generic (PLEG): container finished" podID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerID="311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196" exitCode=0
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.836506 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerDied","Data":"82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361"}
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.837650 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerDied","Data":"ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5"}
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.837671 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerDied","Data":"311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196"}
Nov 24 09:10:38 crc kubenswrapper[4886]: I1124 09:10:38.862393 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3861616d-d10e-4420-8770-397a0f78e143" path="/var/lib/kubelet/pods/3861616d-d10e-4420-8770-397a0f78e143/volumes"
Nov 24 09:10:38 crc kubenswrapper[4886]:
I1124 09:10:38.984532 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:39 crc kubenswrapper[4886]: I1124 09:10:39.858626 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55b43fd3-4034-48fd-ae88-d510286a394a","Type":"ContainerStarted","Data":"9553193047ca6795214f5eed76d478f165b1ccd4754ba1518085478b2a2cbb73"} Nov 24 09:10:39 crc kubenswrapper[4886]: I1124 09:10:39.859136 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55b43fd3-4034-48fd-ae88-d510286a394a","Type":"ContainerStarted","Data":"261f2f6053b09014fb786c6b570d7342ad0cf1454a224f27d9c4b4c7061b991d"} Nov 24 09:10:39 crc kubenswrapper[4886]: I1124 09:10:39.859168 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55b43fd3-4034-48fd-ae88-d510286a394a","Type":"ContainerStarted","Data":"4f0a6e1191006800562602f65a5656d9331930da53b12834240ec2bc09cd151f"} Nov 24 09:10:39 crc kubenswrapper[4886]: I1124 09:10:39.912399 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.91235469 podStartE2EDuration="2.91235469s" podCreationTimestamp="2025-11-24 09:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:39.881948406 +0000 UTC m=+1295.768686581" watchObservedRunningTime="2025-11-24 09:10:39.91235469 +0000 UTC m=+1295.799092845" Nov 24 09:10:40 crc kubenswrapper[4886]: I1124 09:10:40.184468 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:10:40 crc kubenswrapper[4886]: I1124 09:10:40.207520 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:10:40 crc kubenswrapper[4886]: I1124 09:10:40.893931 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.156123 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bx8hx"] Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.157839 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.162891 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.163018 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.166910 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bx8hx"] Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.275240 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-scripts\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.275679 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxsg2\" (UniqueName: \"kubernetes.io/projected/d1ca2352-9c13-4db2-9c3a-ce2557f39968-kube-api-access-pxsg2\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.275710 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-config-data\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.275925 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.289303 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.361104 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2fd72"] Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.361498 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" podUID="0d0df575-904e-4913-9463-7c776faedd7e" containerName="dnsmasq-dns" containerID="cri-o://8402e4fb47d36c81de4a102f9448cd038464cceec3b425edf13179f975413aec" gracePeriod=10 Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.380888 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-scripts\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.380950 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxsg2\" (UniqueName: \"kubernetes.io/projected/d1ca2352-9c13-4db2-9c3a-ce2557f39968-kube-api-access-pxsg2\") pod 
\"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.380990 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-config-data\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.381135 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.388949 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-scripts\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.389122 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.393007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-config-data\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " 
pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.413005 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxsg2\" (UniqueName: \"kubernetes.io/projected/d1ca2352-9c13-4db2-9c3a-ce2557f39968-kube-api-access-pxsg2\") pod \"nova-cell1-cell-mapping-bx8hx\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.494574 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.603781 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" podUID="0d0df575-904e-4913-9463-7c776faedd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.183:5353: connect: connection refused" Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.899063 4886 generic.go:334] "Generic (PLEG): container finished" podID="0d0df575-904e-4913-9463-7c776faedd7e" containerID="8402e4fb47d36c81de4a102f9448cd038464cceec3b425edf13179f975413aec" exitCode=0 Nov 24 09:10:41 crc kubenswrapper[4886]: I1124 09:10:41.900467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" event={"ID":"0d0df575-904e-4913-9463-7c776faedd7e","Type":"ContainerDied","Data":"8402e4fb47d36c81de4a102f9448cd038464cceec3b425edf13179f975413aec"} Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.027933 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.102494 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-config\") pod \"0d0df575-904e-4913-9463-7c776faedd7e\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.102540 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-sb\") pod \"0d0df575-904e-4913-9463-7c776faedd7e\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.102577 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-nb\") pod \"0d0df575-904e-4913-9463-7c776faedd7e\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.102694 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqcnt\" (UniqueName: \"kubernetes.io/projected/0d0df575-904e-4913-9463-7c776faedd7e-kube-api-access-mqcnt\") pod \"0d0df575-904e-4913-9463-7c776faedd7e\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.102739 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-svc\") pod \"0d0df575-904e-4913-9463-7c776faedd7e\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.102841 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-swift-storage-0\") pod \"0d0df575-904e-4913-9463-7c776faedd7e\" (UID: \"0d0df575-904e-4913-9463-7c776faedd7e\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.114626 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0df575-904e-4913-9463-7c776faedd7e-kube-api-access-mqcnt" (OuterVolumeSpecName: "kube-api-access-mqcnt") pod "0d0df575-904e-4913-9463-7c776faedd7e" (UID: "0d0df575-904e-4913-9463-7c776faedd7e"). InnerVolumeSpecName "kube-api-access-mqcnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.143667 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bx8hx"] Nov 24 09:10:42 crc kubenswrapper[4886]: W1124 09:10:42.179073 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1ca2352_9c13_4db2_9c3a_ce2557f39968.slice/crio-6ab5bf568bb4c0d0b1bf3be210d0dd710ce017dd03a27545a063c837622c6f9a WatchSource:0}: Error finding container 6ab5bf568bb4c0d0b1bf3be210d0dd710ce017dd03a27545a063c837622c6f9a: Status 404 returned error can't find the container with id 6ab5bf568bb4c0d0b1bf3be210d0dd710ce017dd03a27545a063c837622c6f9a Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.206642 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqcnt\" (UniqueName: \"kubernetes.io/projected/0d0df575-904e-4913-9463-7c776faedd7e-kube-api-access-mqcnt\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.242683 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d0df575-904e-4913-9463-7c776faedd7e" (UID: "0d0df575-904e-4913-9463-7c776faedd7e"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.244383 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0d0df575-904e-4913-9463-7c776faedd7e" (UID: "0d0df575-904e-4913-9463-7c776faedd7e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.245266 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d0df575-904e-4913-9463-7c776faedd7e" (UID: "0d0df575-904e-4913-9463-7c776faedd7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.248871 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d0df575-904e-4913-9463-7c776faedd7e" (UID: "0d0df575-904e-4913-9463-7c776faedd7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.249337 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-config" (OuterVolumeSpecName: "config") pod "0d0df575-904e-4913-9463-7c776faedd7e" (UID: "0d0df575-904e-4913-9463-7c776faedd7e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.309294 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.309345 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.309362 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.309376 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.309390 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0d0df575-904e-4913-9463-7c776faedd7e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.353680 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.513406 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tngf9\" (UniqueName: \"kubernetes.io/projected/442a4f83-b738-44b6-8018-d89ba1f03cce-kube-api-access-tngf9\") pod \"442a4f83-b738-44b6-8018-d89ba1f03cce\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.513462 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-config-data\") pod \"442a4f83-b738-44b6-8018-d89ba1f03cce\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.513556 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-ceilometer-tls-certs\") pod \"442a4f83-b738-44b6-8018-d89ba1f03cce\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.513616 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-combined-ca-bundle\") pod \"442a4f83-b738-44b6-8018-d89ba1f03cce\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.513672 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-sg-core-conf-yaml\") pod \"442a4f83-b738-44b6-8018-d89ba1f03cce\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.513701 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-run-httpd\") pod \"442a4f83-b738-44b6-8018-d89ba1f03cce\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.513846 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-log-httpd\") pod \"442a4f83-b738-44b6-8018-d89ba1f03cce\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.513920 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-scripts\") pod \"442a4f83-b738-44b6-8018-d89ba1f03cce\" (UID: \"442a4f83-b738-44b6-8018-d89ba1f03cce\") " Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.516095 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "442a4f83-b738-44b6-8018-d89ba1f03cce" (UID: "442a4f83-b738-44b6-8018-d89ba1f03cce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.516191 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "442a4f83-b738-44b6-8018-d89ba1f03cce" (UID: "442a4f83-b738-44b6-8018-d89ba1f03cce"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.521256 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442a4f83-b738-44b6-8018-d89ba1f03cce-kube-api-access-tngf9" (OuterVolumeSpecName: "kube-api-access-tngf9") pod "442a4f83-b738-44b6-8018-d89ba1f03cce" (UID: "442a4f83-b738-44b6-8018-d89ba1f03cce"). InnerVolumeSpecName "kube-api-access-tngf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.523064 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-scripts" (OuterVolumeSpecName: "scripts") pod "442a4f83-b738-44b6-8018-d89ba1f03cce" (UID: "442a4f83-b738-44b6-8018-d89ba1f03cce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.574890 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "442a4f83-b738-44b6-8018-d89ba1f03cce" (UID: "442a4f83-b738-44b6-8018-d89ba1f03cce"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.605614 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "442a4f83-b738-44b6-8018-d89ba1f03cce" (UID: "442a4f83-b738-44b6-8018-d89ba1f03cce"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.617258 4886 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.617482 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.617560 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tngf9\" (UniqueName: \"kubernetes.io/projected/442a4f83-b738-44b6-8018-d89ba1f03cce-kube-api-access-tngf9\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.617619 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.617773 4886 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.617839 4886 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/442a4f83-b738-44b6-8018-d89ba1f03cce-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.639366 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "442a4f83-b738-44b6-8018-d89ba1f03cce" (UID: 
"442a4f83-b738-44b6-8018-d89ba1f03cce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.667998 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-config-data" (OuterVolumeSpecName: "config-data") pod "442a4f83-b738-44b6-8018-d89ba1f03cce" (UID: "442a4f83-b738-44b6-8018-d89ba1f03cce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.720626 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.720665 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442a4f83-b738-44b6-8018-d89ba1f03cce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.921667 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bx8hx" event={"ID":"d1ca2352-9c13-4db2-9c3a-ce2557f39968","Type":"ContainerStarted","Data":"5766d62296fc2d378e3a02a3368370cf6f39033324586106db432cd09ca6b173"} Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.923811 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bx8hx" event={"ID":"d1ca2352-9c13-4db2-9c3a-ce2557f39968","Type":"ContainerStarted","Data":"6ab5bf568bb4c0d0b1bf3be210d0dd710ce017dd03a27545a063c837622c6f9a"} Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.944196 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bx8hx" podStartSLOduration=1.944179608 podStartE2EDuration="1.944179608s" 
podCreationTimestamp="2025-11-24 09:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:42.942610474 +0000 UTC m=+1298.829348609" watchObservedRunningTime="2025-11-24 09:10:42.944179608 +0000 UTC m=+1298.830917743" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.946878 4886 generic.go:334] "Generic (PLEG): container finished" podID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerID="6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635" exitCode=0 Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.947042 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.947090 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerDied","Data":"6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635"} Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.947143 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"442a4f83-b738-44b6-8018-d89ba1f03cce","Type":"ContainerDied","Data":"36bcaf0bceb6f01c9f9ec92c2bba889b4a2181cdf8e5ac0790a8dd43a8b3ca3c"} Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.947185 4886 scope.go:117] "RemoveContainer" containerID="82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361" Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.956650 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" event={"ID":"0d0df575-904e-4913-9463-7c776faedd7e","Type":"ContainerDied","Data":"3ea60b2f2737e31e5db211755e123229296a9b0c8c5c8573c502025d7b846cd3"} Nov 24 09:10:42 crc kubenswrapper[4886]: I1124 09:10:42.956808 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-2fd72" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:42.996254 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.008770 4886 scope.go:117] "RemoveContainer" containerID="ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.013718 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.027760 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2fd72"] Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.042277 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.043189 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0df575-904e-4913-9463-7c776faedd7e" containerName="dnsmasq-dns" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043221 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0df575-904e-4913-9463-7c776faedd7e" containerName="dnsmasq-dns" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.043242 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="sg-core" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043251 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="sg-core" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.043280 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="ceilometer-notification-agent" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043287 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="ceilometer-notification-agent" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.043302 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="ceilometer-central-agent" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043309 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="ceilometer-central-agent" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.043326 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0df575-904e-4913-9463-7c776faedd7e" containerName="init" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043333 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0df575-904e-4913-9463-7c776faedd7e" containerName="init" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.043346 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="proxy-httpd" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043353 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="proxy-httpd" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043655 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="ceilometer-central-agent" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043679 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0df575-904e-4913-9463-7c776faedd7e" containerName="dnsmasq-dns" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043692 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="proxy-httpd" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043710 4886 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="ceilometer-notification-agent" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.043724 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" containerName="sg-core" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.047201 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.047411 4886 scope.go:117] "RemoveContainer" containerID="311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.053233 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-2fd72"] Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.055118 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.056506 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.057719 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.062742 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.093850 4886 scope.go:117] "RemoveContainer" containerID="6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.115050 4886 scope.go:117] "RemoveContainer" containerID="82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.115558 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361\": container with ID starting with 82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361 not found: ID does not exist" containerID="82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.115594 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361"} err="failed to get container status \"82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361\": rpc error: code = NotFound desc = could not find container \"82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361\": container with ID starting with 82343b82905a4aa02d5c2a9fed37be9e6c5fc4943bcf7a0cccf816b9cae45361 not found: ID does not exist" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.115618 4886 scope.go:117] "RemoveContainer" containerID="ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.115914 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5\": container with ID starting with ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5 not found: ID does not exist" containerID="ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.115980 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5"} err="failed to get container status \"ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5\": rpc error: code = NotFound desc = could not find container \"ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5\": container 
with ID starting with ed57bfbf05bd6612fb15dc8e39f21c9a6f60bb0944661aef6f82b69902fc4ba5 not found: ID does not exist" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.116016 4886 scope.go:117] "RemoveContainer" containerID="311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.116310 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196\": container with ID starting with 311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196 not found: ID does not exist" containerID="311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.116341 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196"} err="failed to get container status \"311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196\": rpc error: code = NotFound desc = could not find container \"311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196\": container with ID starting with 311f860f33095878ebefd254f861fa960cc5376577ba8d52de935299150f4196 not found: ID does not exist" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.116357 4886 scope.go:117] "RemoveContainer" containerID="6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635" Nov 24 09:10:43 crc kubenswrapper[4886]: E1124 09:10:43.116650 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635\": container with ID starting with 6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635 not found: ID does not exist" containerID="6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635" 
Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.116679 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635"} err="failed to get container status \"6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635\": rpc error: code = NotFound desc = could not find container \"6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635\": container with ID starting with 6d467f9bbfae23d680ea947cb4cd29445a583e1388be2058f16b700e1bb51635 not found: ID does not exist" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.116692 4886 scope.go:117] "RemoveContainer" containerID="8402e4fb47d36c81de4a102f9448cd038464cceec3b425edf13179f975413aec" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.133180 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.133246 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.133302 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-scripts\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.133339 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbldn\" (UniqueName: \"kubernetes.io/projected/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-kube-api-access-dbldn\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.133364 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.133386 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-run-httpd\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.133427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-log-httpd\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.133470 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-config-data\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.139650 4886 scope.go:117] "RemoveContainer" containerID="7debdc75d3d3bdd633971c610231669c7ebcf26d75f311aabdd8543db55bd51c" Nov 24 09:10:43 crc kubenswrapper[4886]: 
I1124 09:10:43.235759 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.235831 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.235898 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-scripts\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.235942 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbldn\" (UniqueName: \"kubernetes.io/projected/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-kube-api-access-dbldn\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.235966 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.235994 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-run-httpd\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.236048 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-log-httpd\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.236100 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-config-data\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.237794 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-run-httpd\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.237860 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-log-httpd\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.242094 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.242173 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-scripts\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.243601 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.244603 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-config-data\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.246861 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.262702 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbldn\" (UniqueName: \"kubernetes.io/projected/6c1ffc60-4954-4d55-800e-00cb24c6cfa4-kube-api-access-dbldn\") pod \"ceilometer-0\" (UID: \"6c1ffc60-4954-4d55-800e-00cb24c6cfa4\") " pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.374595 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.895870 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:10:43 crc kubenswrapper[4886]: I1124 09:10:43.972788 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c1ffc60-4954-4d55-800e-00cb24c6cfa4","Type":"ContainerStarted","Data":"002d9165de97e353ca904159d14899c9aa2c14223aa989a20bbc362935382028"} Nov 24 09:10:44 crc kubenswrapper[4886]: I1124 09:10:44.899474 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0df575-904e-4913-9463-7c776faedd7e" path="/var/lib/kubelet/pods/0d0df575-904e-4913-9463-7c776faedd7e/volumes" Nov 24 09:10:44 crc kubenswrapper[4886]: I1124 09:10:44.900772 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442a4f83-b738-44b6-8018-d89ba1f03cce" path="/var/lib/kubelet/pods/442a4f83-b738-44b6-8018-d89ba1f03cce/volumes" Nov 24 09:10:45 crc kubenswrapper[4886]: I1124 09:10:45.001251 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c1ffc60-4954-4d55-800e-00cb24c6cfa4","Type":"ContainerStarted","Data":"749ede4d0b9d7d3bc4ad267bd124c17db87701076aa2de08c1686a2fd629e902"} Nov 24 09:10:46 crc kubenswrapper[4886]: I1124 09:10:46.017685 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c1ffc60-4954-4d55-800e-00cb24c6cfa4","Type":"ContainerStarted","Data":"e9c557c027292be3c0745bff91e41b9fadc7b40ea473f9090f7f92ec9e0b39a8"} Nov 24 09:10:46 crc kubenswrapper[4886]: E1124 09:10:46.649948 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7266b1c_bb21_4f54_994c_52ab5db8d4eb.slice\": RecentStats: unable to find data in memory cache]" Nov 24 09:10:47 crc kubenswrapper[4886]: I1124 
09:10:47.030575 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c1ffc60-4954-4d55-800e-00cb24c6cfa4","Type":"ContainerStarted","Data":"03d6dea1227ec7a5fe50e10159db22b93e872753a80c06fc71bab0c24e0928fa"} Nov 24 09:10:48 crc kubenswrapper[4886]: I1124 09:10:48.047706 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c1ffc60-4954-4d55-800e-00cb24c6cfa4","Type":"ContainerStarted","Data":"f6b993b09668729d901b2cc4cdaaa4e6f5441cf2da1e277234f9562610c7d926"} Nov 24 09:10:48 crc kubenswrapper[4886]: I1124 09:10:48.049298 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:10:48 crc kubenswrapper[4886]: I1124 09:10:48.052588 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1ca2352-9c13-4db2-9c3a-ce2557f39968" containerID="5766d62296fc2d378e3a02a3368370cf6f39033324586106db432cd09ca6b173" exitCode=0 Nov 24 09:10:48 crc kubenswrapper[4886]: I1124 09:10:48.052792 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bx8hx" event={"ID":"d1ca2352-9c13-4db2-9c3a-ce2557f39968","Type":"ContainerDied","Data":"5766d62296fc2d378e3a02a3368370cf6f39033324586106db432cd09ca6b173"} Nov 24 09:10:48 crc kubenswrapper[4886]: I1124 09:10:48.088756 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.454575272 podStartE2EDuration="6.088724973s" podCreationTimestamp="2025-11-24 09:10:42 +0000 UTC" firstStartedPulling="2025-11-24 09:10:43.899900472 +0000 UTC m=+1299.786638607" lastFinishedPulling="2025-11-24 09:10:47.534050173 +0000 UTC m=+1303.420788308" observedRunningTime="2025-11-24 09:10:48.077976157 +0000 UTC m=+1303.964714292" watchObservedRunningTime="2025-11-24 09:10:48.088724973 +0000 UTC m=+1303.975463108" Nov 24 09:10:48 crc kubenswrapper[4886]: I1124 09:10:48.403826 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:10:48 crc kubenswrapper[4886]: I1124 09:10:48.403911 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.419559 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.419579 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.196:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.492872 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.592955 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxsg2\" (UniqueName: \"kubernetes.io/projected/d1ca2352-9c13-4db2-9c3a-ce2557f39968-kube-api-access-pxsg2\") pod \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.593070 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-scripts\") pod \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.593114 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-combined-ca-bundle\") pod \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.593224 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-config-data\") pod \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\" (UID: \"d1ca2352-9c13-4db2-9c3a-ce2557f39968\") " Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.610479 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ca2352-9c13-4db2-9c3a-ce2557f39968-kube-api-access-pxsg2" (OuterVolumeSpecName: "kube-api-access-pxsg2") pod "d1ca2352-9c13-4db2-9c3a-ce2557f39968" (UID: "d1ca2352-9c13-4db2-9c3a-ce2557f39968"). InnerVolumeSpecName "kube-api-access-pxsg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.611827 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-scripts" (OuterVolumeSpecName: "scripts") pod "d1ca2352-9c13-4db2-9c3a-ce2557f39968" (UID: "d1ca2352-9c13-4db2-9c3a-ce2557f39968"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.642455 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-config-data" (OuterVolumeSpecName: "config-data") pod "d1ca2352-9c13-4db2-9c3a-ce2557f39968" (UID: "d1ca2352-9c13-4db2-9c3a-ce2557f39968"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.644156 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1ca2352-9c13-4db2-9c3a-ce2557f39968" (UID: "d1ca2352-9c13-4db2-9c3a-ce2557f39968"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.695699 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxsg2\" (UniqueName: \"kubernetes.io/projected/d1ca2352-9c13-4db2-9c3a-ce2557f39968-kube-api-access-pxsg2\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.695741 4886 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.695752 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:49 crc kubenswrapper[4886]: I1124 09:10:49.695761 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ca2352-9c13-4db2-9c3a-ce2557f39968-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.081236 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bx8hx" event={"ID":"d1ca2352-9c13-4db2-9c3a-ce2557f39968","Type":"ContainerDied","Data":"6ab5bf568bb4c0d0b1bf3be210d0dd710ce017dd03a27545a063c837622c6f9a"} Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.081707 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ab5bf568bb4c0d0b1bf3be210d0dd710ce017dd03a27545a063c837622c6f9a" Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.081569 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bx8hx" Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.302713 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.303064 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-log" containerID="cri-o://261f2f6053b09014fb786c6b570d7342ad0cf1454a224f27d9c4b4c7061b991d" gracePeriod=30 Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.303155 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-api" containerID="cri-o://9553193047ca6795214f5eed76d478f165b1ccd4754ba1518085478b2a2cbb73" gracePeriod=30 Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.340191 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.340434 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0402c99c-2124-499a-8682-bc7cab563f47" containerName="nova-scheduler-scheduler" containerID="cri-o://9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92" gracePeriod=30 Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.356062 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.356360 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-log" containerID="cri-o://e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa" gracePeriod=30 Nov 24 09:10:50 crc kubenswrapper[4886]: I1124 09:10:50.357017 4886 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-metadata" containerID="cri-o://b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1" gracePeriod=30 Nov 24 09:10:51 crc kubenswrapper[4886]: I1124 09:10:51.102465 4886 generic.go:334] "Generic (PLEG): container finished" podID="55b43fd3-4034-48fd-ae88-d510286a394a" containerID="261f2f6053b09014fb786c6b570d7342ad0cf1454a224f27d9c4b4c7061b991d" exitCode=143 Nov 24 09:10:51 crc kubenswrapper[4886]: I1124 09:10:51.102571 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55b43fd3-4034-48fd-ae88-d510286a394a","Type":"ContainerDied","Data":"261f2f6053b09014fb786c6b570d7342ad0cf1454a224f27d9c4b4c7061b991d"} Nov 24 09:10:51 crc kubenswrapper[4886]: I1124 09:10:51.108200 4886 generic.go:334] "Generic (PLEG): container finished" podID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerID="e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa" exitCode=143 Nov 24 09:10:51 crc kubenswrapper[4886]: I1124 09:10:51.108254 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ce44156-b3a8-4520-aab3-2c829c7d26cb","Type":"ContainerDied","Data":"e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa"} Nov 24 09:10:52 crc kubenswrapper[4886]: E1124 09:10:52.856231 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92 is running failed: container process not found" containerID="9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 09:10:52 crc kubenswrapper[4886]: E1124 09:10:52.857140 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92 is running failed: container process not found" containerID="9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 09:10:52 crc kubenswrapper[4886]: E1124 09:10:52.857540 4886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92 is running failed: container process not found" containerID="9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 09:10:52 crc kubenswrapper[4886]: E1124 09:10:52.857585 4886 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0402c99c-2124-499a-8682-bc7cab563f47" containerName="nova-scheduler-scheduler" Nov 24 09:10:52 crc kubenswrapper[4886]: I1124 09:10:52.979333 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.086064 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bpkl\" (UniqueName: \"kubernetes.io/projected/0402c99c-2124-499a-8682-bc7cab563f47-kube-api-access-9bpkl\") pod \"0402c99c-2124-499a-8682-bc7cab563f47\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.086212 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-config-data\") pod \"0402c99c-2124-499a-8682-bc7cab563f47\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.086291 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-combined-ca-bundle\") pod \"0402c99c-2124-499a-8682-bc7cab563f47\" (UID: \"0402c99c-2124-499a-8682-bc7cab563f47\") " Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.094740 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0402c99c-2124-499a-8682-bc7cab563f47-kube-api-access-9bpkl" (OuterVolumeSpecName: "kube-api-access-9bpkl") pod "0402c99c-2124-499a-8682-bc7cab563f47" (UID: "0402c99c-2124-499a-8682-bc7cab563f47"). InnerVolumeSpecName "kube-api-access-9bpkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.118471 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-config-data" (OuterVolumeSpecName: "config-data") pod "0402c99c-2124-499a-8682-bc7cab563f47" (UID: "0402c99c-2124-499a-8682-bc7cab563f47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.127588 4886 generic.go:334] "Generic (PLEG): container finished" podID="0402c99c-2124-499a-8682-bc7cab563f47" containerID="9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92" exitCode=0 Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.127629 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0402c99c-2124-499a-8682-bc7cab563f47","Type":"ContainerDied","Data":"9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92"} Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.127659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0402c99c-2124-499a-8682-bc7cab563f47","Type":"ContainerDied","Data":"2f172c7edd110cd54853b066061695f0362812ca17546900ab64b895b0bfcff0"} Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.127658 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.127708 4886 scope.go:117] "RemoveContainer" containerID="9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.142025 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0402c99c-2124-499a-8682-bc7cab563f47" (UID: "0402c99c-2124-499a-8682-bc7cab563f47"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.188665 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bpkl\" (UniqueName: \"kubernetes.io/projected/0402c99c-2124-499a-8682-bc7cab563f47-kube-api-access-9bpkl\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.188719 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.188730 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0402c99c-2124-499a-8682-bc7cab563f47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.219872 4886 scope.go:117] "RemoveContainer" containerID="9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92" Nov 24 09:10:53 crc kubenswrapper[4886]: E1124 09:10:53.220550 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92\": container with ID starting with 9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92 not found: ID does not exist" containerID="9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.220629 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92"} err="failed to get container status \"9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92\": rpc error: code = NotFound desc = could not find container \"9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92\": container with ID 
starting with 9b5c5ed6eaa4982cd125c0d41e3f4375821f9e410254cab23496e2ee15e87f92 not found: ID does not exist" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.464016 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.488834 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.497883 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:53 crc kubenswrapper[4886]: E1124 09:10:53.498556 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0402c99c-2124-499a-8682-bc7cab563f47" containerName="nova-scheduler-scheduler" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.498578 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0402c99c-2124-499a-8682-bc7cab563f47" containerName="nova-scheduler-scheduler" Nov 24 09:10:53 crc kubenswrapper[4886]: E1124 09:10:53.498605 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ca2352-9c13-4db2-9c3a-ce2557f39968" containerName="nova-manage" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.498613 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ca2352-9c13-4db2-9c3a-ce2557f39968" containerName="nova-manage" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.499269 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ca2352-9c13-4db2-9c3a-ce2557f39968" containerName="nova-manage" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.499296 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0402c99c-2124-499a-8682-bc7cab563f47" containerName="nova-scheduler-scheduler" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.504265 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.508750 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.512425 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.597466 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-config-data\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.598015 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.598201 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldpdf\" (UniqueName: \"kubernetes.io/projected/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-kube-api-access-ldpdf\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.703627 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.703708 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldpdf\" (UniqueName: \"kubernetes.io/projected/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-kube-api-access-ldpdf\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.703757 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-config-data\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.710006 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.711808 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-config-data\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.724718 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldpdf\" (UniqueName: \"kubernetes.io/projected/8288e829-a6d4-4f11-abf2-e9cd50df6c4b-kube-api-access-ldpdf\") pod \"nova-scheduler-0\" (UID: \"8288e829-a6d4-4f11-abf2-e9cd50df6c4b\") " pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.824376 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:10:53 crc kubenswrapper[4886]: I1124 09:10:53.962212 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.014245 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce44156-b3a8-4520-aab3-2c829c7d26cb-logs\") pod \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.014765 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5dxg\" (UniqueName: \"kubernetes.io/projected/2ce44156-b3a8-4520-aab3-2c829c7d26cb-kube-api-access-m5dxg\") pod \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.014831 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-nova-metadata-tls-certs\") pod \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.014964 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce44156-b3a8-4520-aab3-2c829c7d26cb-logs" (OuterVolumeSpecName: "logs") pod "2ce44156-b3a8-4520-aab3-2c829c7d26cb" (UID: "2ce44156-b3a8-4520-aab3-2c829c7d26cb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.015119 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-config-data\") pod \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.015203 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-combined-ca-bundle\") pod \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\" (UID: \"2ce44156-b3a8-4520-aab3-2c829c7d26cb\") " Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.015928 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce44156-b3a8-4520-aab3-2c829c7d26cb-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.022766 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce44156-b3a8-4520-aab3-2c829c7d26cb-kube-api-access-m5dxg" (OuterVolumeSpecName: "kube-api-access-m5dxg") pod "2ce44156-b3a8-4520-aab3-2c829c7d26cb" (UID: "2ce44156-b3a8-4520-aab3-2c829c7d26cb"). InnerVolumeSpecName "kube-api-access-m5dxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.052923 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ce44156-b3a8-4520-aab3-2c829c7d26cb" (UID: "2ce44156-b3a8-4520-aab3-2c829c7d26cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.063705 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-config-data" (OuterVolumeSpecName: "config-data") pod "2ce44156-b3a8-4520-aab3-2c829c7d26cb" (UID: "2ce44156-b3a8-4520-aab3-2c829c7d26cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.086206 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2ce44156-b3a8-4520-aab3-2c829c7d26cb" (UID: "2ce44156-b3a8-4520-aab3-2c829c7d26cb"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.118257 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5dxg\" (UniqueName: \"kubernetes.io/projected/2ce44156-b3a8-4520-aab3-2c829c7d26cb-kube-api-access-m5dxg\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.118346 4886 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.118360 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.118375 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ce44156-b3a8-4520-aab3-2c829c7d26cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.153114 4886 generic.go:334] "Generic (PLEG): container finished" podID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerID="b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1" exitCode=0 Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.153189 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.153203 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ce44156-b3a8-4520-aab3-2c829c7d26cb","Type":"ContainerDied","Data":"b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1"} Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.153273 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ce44156-b3a8-4520-aab3-2c829c7d26cb","Type":"ContainerDied","Data":"4309c12fefb1a592f0a9b96345ae3ecca5e5af5c2e5629d7a3696f59fca9a272"} Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.153296 4886 scope.go:117] "RemoveContainer" containerID="b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.199988 4886 scope.go:117] "RemoveContainer" containerID="e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.209359 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.224287 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.225321 4886 scope.go:117] "RemoveContainer" containerID="b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1" Nov 24 
09:10:54 crc kubenswrapper[4886]: E1124 09:10:54.232105 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1\": container with ID starting with b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1 not found: ID does not exist" containerID="b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.232179 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1"} err="failed to get container status \"b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1\": rpc error: code = NotFound desc = could not find container \"b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1\": container with ID starting with b35611f43db7655fcabeb5604ac46c1a7a03ba2fdc1fc83b5e66a41e15d1b2e1 not found: ID does not exist" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.232218 4886 scope.go:117] "RemoveContainer" containerID="e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa" Nov 24 09:10:54 crc kubenswrapper[4886]: E1124 09:10:54.232706 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa\": container with ID starting with e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa not found: ID does not exist" containerID="e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.232740 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa"} err="failed to get container status 
\"e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa\": rpc error: code = NotFound desc = could not find container \"e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa\": container with ID starting with e417465e4c231478cf42687cc55cc3ea3f124f708793e33bd266e2a67d28f1fa not found: ID does not exist" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.237426 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:54 crc kubenswrapper[4886]: E1124 09:10:54.237947 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-metadata" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.237967 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-metadata" Nov 24 09:10:54 crc kubenswrapper[4886]: E1124 09:10:54.237995 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-log" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.238002 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-log" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.238205 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-log" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.238231 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-metadata" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.239323 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.242463 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.243247 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.254337 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:54 crc kubenswrapper[4886]: W1124 09:10:54.314299 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8288e829_a6d4_4f11_abf2_e9cd50df6c4b.slice/crio-76b0ae8309e80881e26d4d6a5910594017e7a6ea4ef8b95de4e1e4b8df3d823c WatchSource:0}: Error finding container 76b0ae8309e80881e26d4d6a5910594017e7a6ea4ef8b95de4e1e4b8df3d823c: Status 404 returned error can't find the container with id 76b0ae8309e80881e26d4d6a5910594017e7a6ea4ef8b95de4e1e4b8df3d823c Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.315792 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.323474 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.323549 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " 
pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.323589 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptgt\" (UniqueName: \"kubernetes.io/projected/6d1021e4-f165-4881-9bcc-2cc19416ab64-kube-api-access-cptgt\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.323631 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1021e4-f165-4881-9bcc-2cc19416ab64-logs\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.323860 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-config-data\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.425721 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.425821 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.425875 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cptgt\" (UniqueName: \"kubernetes.io/projected/6d1021e4-f165-4881-9bcc-2cc19416ab64-kube-api-access-cptgt\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.425919 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1021e4-f165-4881-9bcc-2cc19416ab64-logs\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.425978 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-config-data\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.426728 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1021e4-f165-4881-9bcc-2cc19416ab64-logs\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.431599 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.433916 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.434258 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1021e4-f165-4881-9bcc-2cc19416ab64-config-data\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.449638 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cptgt\" (UniqueName: \"kubernetes.io/projected/6d1021e4-f165-4881-9bcc-2cc19416ab64-kube-api-access-cptgt\") pod \"nova-metadata-0\" (UID: \"6d1021e4-f165-4881-9bcc-2cc19416ab64\") " pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.562049 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.864855 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0402c99c-2124-499a-8682-bc7cab563f47" path="/var/lib/kubelet/pods/0402c99c-2124-499a-8682-bc7cab563f47/volumes" Nov 24 09:10:54 crc kubenswrapper[4886]: I1124 09:10:54.866358 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" path="/var/lib/kubelet/pods/2ce44156-b3a8-4520-aab3-2c829c7d26cb/volumes" Nov 24 09:10:55 crc kubenswrapper[4886]: W1124 09:10:55.086983 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d1021e4_f165_4881_9bcc_2cc19416ab64.slice/crio-aa6bb23e5903fda68e315516dc75ea9ed885eb0b76244d8b9f8c47e97325a3e9 WatchSource:0}: Error finding container aa6bb23e5903fda68e315516dc75ea9ed885eb0b76244d8b9f8c47e97325a3e9: Status 404 returned error can't find the container with id 
aa6bb23e5903fda68e315516dc75ea9ed885eb0b76244d8b9f8c47e97325a3e9 Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.095622 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.168988 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8288e829-a6d4-4f11-abf2-e9cd50df6c4b","Type":"ContainerStarted","Data":"15abaf4296a24b516e3ca7667928c7c276938724f08163b4a4bd425037ab81fc"} Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.169033 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8288e829-a6d4-4f11-abf2-e9cd50df6c4b","Type":"ContainerStarted","Data":"76b0ae8309e80881e26d4d6a5910594017e7a6ea4ef8b95de4e1e4b8df3d823c"} Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.184018 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6d1021e4-f165-4881-9bcc-2cc19416ab64","Type":"ContainerStarted","Data":"aa6bb23e5903fda68e315516dc75ea9ed885eb0b76244d8b9f8c47e97325a3e9"} Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.192196 4886 generic.go:334] "Generic (PLEG): container finished" podID="55b43fd3-4034-48fd-ae88-d510286a394a" containerID="9553193047ca6795214f5eed76d478f165b1ccd4754ba1518085478b2a2cbb73" exitCode=0 Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.192254 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55b43fd3-4034-48fd-ae88-d510286a394a","Type":"ContainerDied","Data":"9553193047ca6795214f5eed76d478f165b1ccd4754ba1518085478b2a2cbb73"} Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.203429 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.20340779 podStartE2EDuration="2.20340779s" podCreationTimestamp="2025-11-24 09:10:53 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:55.189753212 +0000 UTC m=+1311.076491347" watchObservedRunningTime="2025-11-24 09:10:55.20340779 +0000 UTC m=+1311.090145945" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.285013 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.343137 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-public-tls-certs\") pod \"55b43fd3-4034-48fd-ae88-d510286a394a\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.343243 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-internal-tls-certs\") pod \"55b43fd3-4034-48fd-ae88-d510286a394a\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.343439 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b43fd3-4034-48fd-ae88-d510286a394a-logs\") pod \"55b43fd3-4034-48fd-ae88-d510286a394a\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.343494 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-config-data\") pod \"55b43fd3-4034-48fd-ae88-d510286a394a\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.343525 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-combined-ca-bundle\") pod \"55b43fd3-4034-48fd-ae88-d510286a394a\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.343589 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fkzl\" (UniqueName: \"kubernetes.io/projected/55b43fd3-4034-48fd-ae88-d510286a394a-kube-api-access-2fkzl\") pod \"55b43fd3-4034-48fd-ae88-d510286a394a\" (UID: \"55b43fd3-4034-48fd-ae88-d510286a394a\") " Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.344819 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b43fd3-4034-48fd-ae88-d510286a394a-logs" (OuterVolumeSpecName: "logs") pod "55b43fd3-4034-48fd-ae88-d510286a394a" (UID: "55b43fd3-4034-48fd-ae88-d510286a394a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.352137 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b43fd3-4034-48fd-ae88-d510286a394a-kube-api-access-2fkzl" (OuterVolumeSpecName: "kube-api-access-2fkzl") pod "55b43fd3-4034-48fd-ae88-d510286a394a" (UID: "55b43fd3-4034-48fd-ae88-d510286a394a"). InnerVolumeSpecName "kube-api-access-2fkzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.390418 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-config-data" (OuterVolumeSpecName: "config-data") pod "55b43fd3-4034-48fd-ae88-d510286a394a" (UID: "55b43fd3-4034-48fd-ae88-d510286a394a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.398261 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55b43fd3-4034-48fd-ae88-d510286a394a" (UID: "55b43fd3-4034-48fd-ae88-d510286a394a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.423715 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55b43fd3-4034-48fd-ae88-d510286a394a" (UID: "55b43fd3-4034-48fd-ae88-d510286a394a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.426462 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "55b43fd3-4034-48fd-ae88-d510286a394a" (UID: "55b43fd3-4034-48fd-ae88-d510286a394a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.446061 4886 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b43fd3-4034-48fd-ae88-d510286a394a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.446102 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.446118 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.446157 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fkzl\" (UniqueName: \"kubernetes.io/projected/55b43fd3-4034-48fd-ae88-d510286a394a-kube-api-access-2fkzl\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.446226 4886 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:55 crc kubenswrapper[4886]: I1124 09:10:55.446237 4886 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b43fd3-4034-48fd-ae88-d510286a394a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.212991 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55b43fd3-4034-48fd-ae88-d510286a394a","Type":"ContainerDied","Data":"4f0a6e1191006800562602f65a5656d9331930da53b12834240ec2bc09cd151f"} Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 
09:10:56.213502 4886 scope.go:117] "RemoveContainer" containerID="9553193047ca6795214f5eed76d478f165b1ccd4754ba1518085478b2a2cbb73" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.213058 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.224973 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6d1021e4-f165-4881-9bcc-2cc19416ab64","Type":"ContainerStarted","Data":"7f67645ecc776e2fa5ffb21abdfc1d6756510a2c9d68ecd6f78a93e1f84424e5"} Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.225032 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6d1021e4-f165-4881-9bcc-2cc19416ab64","Type":"ContainerStarted","Data":"e16c70430ba4c1503f34f1c9a28b932976e839f19003ef034130a359487b66b1"} Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.250289 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.250262812 podStartE2EDuration="2.250262812s" podCreationTimestamp="2025-11-24 09:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:56.244130158 +0000 UTC m=+1312.130868303" watchObservedRunningTime="2025-11-24 09:10:56.250262812 +0000 UTC m=+1312.137000937" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.267636 4886 scope.go:117] "RemoveContainer" containerID="261f2f6053b09014fb786c6b570d7342ad0cf1454a224f27d9c4b4c7061b991d" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.273240 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.289140 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.296591 
4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:56 crc kubenswrapper[4886]: E1124 09:10:56.297136 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-api" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.297160 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-api" Nov 24 09:10:56 crc kubenswrapper[4886]: E1124 09:10:56.297208 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-log" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.297216 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-log" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.297451 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-api" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.297474 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" containerName="nova-api-log" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.298612 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.303249 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.303288 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.303438 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.314824 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.369984 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-public-tls-certs\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.370131 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.370206 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.370246 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-59wg9\" (UniqueName: \"kubernetes.io/projected/f31eee06-9a4d-4956-b314-b4413ac5aba0-kube-api-access-59wg9\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.370264 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31eee06-9a4d-4956-b314-b4413ac5aba0-logs\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.370300 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-config-data\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.472024 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.472100 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.472124 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59wg9\" (UniqueName: \"kubernetes.io/projected/f31eee06-9a4d-4956-b314-b4413ac5aba0-kube-api-access-59wg9\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " 
pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.472141 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31eee06-9a4d-4956-b314-b4413ac5aba0-logs\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.472208 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-config-data\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.472299 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-public-tls-certs\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.473203 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31eee06-9a4d-4956-b314-b4413ac5aba0-logs\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.477798 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-config-data\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.478619 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.481998 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.490225 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31eee06-9a4d-4956-b314-b4413ac5aba0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.490916 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59wg9\" (UniqueName: \"kubernetes.io/projected/f31eee06-9a4d-4956-b314-b4413ac5aba0-kube-api-access-59wg9\") pod \"nova-api-0\" (UID: \"f31eee06-9a4d-4956-b314-b4413ac5aba0\") " pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.627739 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:10:56 crc kubenswrapper[4886]: I1124 09:10:56.863896 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b43fd3-4034-48fd-ae88-d510286a394a" path="/var/lib/kubelet/pods/55b43fd3-4034-48fd-ae88-d510286a394a/volumes" Nov 24 09:10:56 crc kubenswrapper[4886]: E1124 09:10:56.923994 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7266b1c_bb21_4f54_994c_52ab5db8d4eb.slice\": RecentStats: unable to find data in memory cache]" Nov 24 09:10:57 crc kubenswrapper[4886]: W1124 09:10:57.120870 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31eee06_9a4d_4956_b314_b4413ac5aba0.slice/crio-806ace29eb5548730294272892f593b2c277ae1bbdd96ec474e3fe91ce604038 WatchSource:0}: Error finding container 806ace29eb5548730294272892f593b2c277ae1bbdd96ec474e3fe91ce604038: Status 404 returned error can't find the container with id 806ace29eb5548730294272892f593b2c277ae1bbdd96ec474e3fe91ce604038 Nov 24 09:10:57 crc kubenswrapper[4886]: I1124 09:10:57.124612 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:10:57 crc kubenswrapper[4886]: I1124 09:10:57.239447 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f31eee06-9a4d-4956-b314-b4413ac5aba0","Type":"ContainerStarted","Data":"806ace29eb5548730294272892f593b2c277ae1bbdd96ec474e3fe91ce604038"} Nov 24 09:10:58 crc kubenswrapper[4886]: I1124 09:10:58.256540 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f31eee06-9a4d-4956-b314-b4413ac5aba0","Type":"ContainerStarted","Data":"941a6d1b2a8cc37fba61332294fdb395d57bcb86a7843e89de597b2f391b5b05"} Nov 24 09:10:58 crc kubenswrapper[4886]: I1124 09:10:58.258084 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f31eee06-9a4d-4956-b314-b4413ac5aba0","Type":"ContainerStarted","Data":"633a4ef379dde645609197d03c40b11ed66526c93cde522733ac643393093da5"} Nov 24 09:10:58 crc kubenswrapper[4886]: I1124 09:10:58.295728 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.295701936 podStartE2EDuration="2.295701936s" podCreationTimestamp="2025-11-24 09:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:58.285684762 +0000 UTC m=+1314.172422897" watchObservedRunningTime="2025-11-24 09:10:58.295701936 +0000 UTC m=+1314.182440071" Nov 24 09:10:58 crc kubenswrapper[4886]: I1124 09:10:58.760184 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 09:10:58 crc kubenswrapper[4886]: I1124 09:10:58.762414 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2ce44156-b3a8-4520-aab3-2c829c7d26cb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 09:10:58 crc kubenswrapper[4886]: I1124 09:10:58.824864 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 09:10:59 crc kubenswrapper[4886]: I1124 09:10:59.563122 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 09:10:59 crc kubenswrapper[4886]: I1124 09:10:59.563191 4886 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 09:11:03 crc kubenswrapper[4886]: I1124 09:11:03.825363 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 09:11:03 crc kubenswrapper[4886]: I1124 09:11:03.856388 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 09:11:04 crc kubenswrapper[4886]: I1124 09:11:04.388564 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 09:11:04 crc kubenswrapper[4886]: I1124 09:11:04.562333 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 09:11:04 crc kubenswrapper[4886]: I1124 09:11:04.562847 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 09:11:05 crc kubenswrapper[4886]: I1124 09:11:05.577411 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6d1021e4-f165-4881-9bcc-2cc19416ab64" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 09:11:05 crc kubenswrapper[4886]: I1124 09:11:05.577450 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6d1021e4-f165-4881-9bcc-2cc19416ab64" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 09:11:06 crc kubenswrapper[4886]: I1124 09:11:06.629314 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:11:06 crc kubenswrapper[4886]: I1124 09:11:06.629396 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:11:07 crc kubenswrapper[4886]: I1124 09:11:07.641349 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f31eee06-9a4d-4956-b314-b4413ac5aba0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 09:11:07 crc kubenswrapper[4886]: I1124 09:11:07.641356 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f31eee06-9a4d-4956-b314-b4413ac5aba0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 09:11:13 crc kubenswrapper[4886]: I1124 09:11:13.385945 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 09:11:14 crc kubenswrapper[4886]: I1124 09:11:14.571389 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 09:11:14 crc kubenswrapper[4886]: I1124 09:11:14.571807 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 09:11:14 crc kubenswrapper[4886]: I1124 09:11:14.576974 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 09:11:14 crc kubenswrapper[4886]: I1124 09:11:14.579904 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 09:11:16 crc kubenswrapper[4886]: I1124 09:11:16.638092 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 09:11:16 crc kubenswrapper[4886]: I1124 09:11:16.638633 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 09:11:16 crc kubenswrapper[4886]: I1124 
09:11:16.639147 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 09:11:16 crc kubenswrapper[4886]: I1124 09:11:16.639197 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 09:11:16 crc kubenswrapper[4886]: I1124 09:11:16.646544 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 09:11:16 crc kubenswrapper[4886]: I1124 09:11:16.646918 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 09:11:26 crc kubenswrapper[4886]: I1124 09:11:26.099808 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:11:27 crc kubenswrapper[4886]: I1124 09:11:27.204144 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:11:30 crc kubenswrapper[4886]: I1124 09:11:30.695227 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerName="rabbitmq" containerID="cri-o://69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745" gracePeriod=604796 Nov 24 09:11:31 crc kubenswrapper[4886]: I1124 09:11:31.803497 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f10026aa-640c-4f36-9912-cd4177af074d" containerName="rabbitmq" containerID="cri-o://3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7" gracePeriod=604796 Nov 24 09:11:32 crc kubenswrapper[4886]: I1124 09:11:32.790143 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Nov 24 09:11:33 crc kubenswrapper[4886]: I1124 09:11:33.189035 4886 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f10026aa-640c-4f36-9912-cd4177af074d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.309700 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.481752 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-plugins-conf\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.481852 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-config-data\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.481904 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-plugins\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.482005 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/510b7a7a-1206-44f7-bd72-a85590e7a1ac-erlang-cookie-secret\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.482027 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-tls\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.482105 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcngm\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-kube-api-access-tcngm\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.482125 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-confd\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.482165 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/510b7a7a-1206-44f7-bd72-a85590e7a1ac-pod-info\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.482206 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.482454 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.483318 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-server-conf\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.483343 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.483353 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-erlang-cookie\") pod \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\" (UID: \"510b7a7a-1206-44f7-bd72-a85590e7a1ac\") " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.484481 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.484529 4886 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.484546 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.492362 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.495132 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.497391 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/510b7a7a-1206-44f7-bd72-a85590e7a1ac-pod-info" (OuterVolumeSpecName: "pod-info") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.503400 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510b7a7a-1206-44f7-bd72-a85590e7a1ac-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.503463 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-kube-api-access-tcngm" (OuterVolumeSpecName: "kube-api-access-tcngm") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "kube-api-access-tcngm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.526202 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-config-data" (OuterVolumeSpecName: "config-data") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.579194 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-server-conf" (OuterVolumeSpecName: "server-conf") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.586661 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcngm\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-kube-api-access-tcngm\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.587051 4886 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/510b7a7a-1206-44f7-bd72-a85590e7a1ac-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.587178 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.587251 4886 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.587311 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.587368 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/510b7a7a-1206-44f7-bd72-a85590e7a1ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.587429 4886 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/510b7a7a-1206-44f7-bd72-a85590e7a1ac-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.587497 
4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.608782 4886 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.632317 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "510b7a7a-1206-44f7-bd72-a85590e7a1ac" (UID: "510b7a7a-1206-44f7-bd72-a85590e7a1ac"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.690221 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/510b7a7a-1206-44f7-bd72-a85590e7a1ac-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.690279 4886 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.746304 4886 generic.go:334] "Generic (PLEG): container finished" podID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerID="69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745" exitCode=0 Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.746390 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.746397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"510b7a7a-1206-44f7-bd72-a85590e7a1ac","Type":"ContainerDied","Data":"69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745"} Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.746490 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"510b7a7a-1206-44f7-bd72-a85590e7a1ac","Type":"ContainerDied","Data":"be19a30b49d26ec8f50d17b151a8436d3164406eac8524c25fa18c0d620d188d"} Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.746513 4886 scope.go:117] "RemoveContainer" containerID="69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.788146 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.789651 4886 scope.go:117] "RemoveContainer" containerID="63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.809742 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.839018 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:11:37 crc kubenswrapper[4886]: E1124 09:11:37.840197 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerName="setup-container" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.840226 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerName="setup-container" Nov 24 09:11:37 crc kubenswrapper[4886]: E1124 09:11:37.840236 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerName="rabbitmq" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.840248 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerName="rabbitmq" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.840510 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" containerName="rabbitmq" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.842096 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.846016 4886 scope.go:117] "RemoveContainer" containerID="69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.846357 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.846355 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cljrm" Nov 24 09:11:37 crc kubenswrapper[4886]: E1124 09:11:37.846918 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745\": container with ID starting with 69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745 not found: ID does not exist" containerID="69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.846953 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745"} err="failed to get container status \"69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745\": rpc error: code = NotFound desc = could not find 
container \"69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745\": container with ID starting with 69d84f8ba68cde94097a7d01257c5d06594a351a401687983ff87b10e4d1a745 not found: ID does not exist" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.846977 4886 scope.go:117] "RemoveContainer" containerID="63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60" Nov 24 09:11:37 crc kubenswrapper[4886]: E1124 09:11:37.847170 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60\": container with ID starting with 63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60 not found: ID does not exist" containerID="63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.847192 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60"} err="failed to get container status \"63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60\": rpc error: code = NotFound desc = could not find container \"63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60\": container with ID starting with 63f08eea34ce9c4a5864e0f2001dccb0b8447f6d4df9007985b597db35ca4b60 not found: ID does not exist" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.847858 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.848198 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.848316 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 
09:11:37.848383 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.848422 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.893981 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.994911 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995029 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995071 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995106 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29qdd\" (UniqueName: \"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-kube-api-access-29qdd\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc 
kubenswrapper[4886]: I1124 09:11:37.995214 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995287 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-config-data\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995303 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995360 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995382 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995406 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:37 crc kubenswrapper[4886]: I1124 09:11:37.995472 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.098278 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.098497 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.098591 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.098655 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29qdd\" (UniqueName: 
\"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-kube-api-access-29qdd\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.098848 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.098889 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.099459 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.099689 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-config-data\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.099749 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.099969 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.100006 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.100066 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.100090 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.100770 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-config-data\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.101909 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.105091 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.105752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.105820 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.105976 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.109644 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.127328 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.139561 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29qdd\" (UniqueName: \"kubernetes.io/projected/f14f0ef7-768e-4fc8-a2d1-b852fe44d773-kube-api-access-29qdd\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.166007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"f14f0ef7-768e-4fc8-a2d1-b852fe44d773\") " pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.179523 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: E1124 09:11:38.206774 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf10026aa_640c_4f36_9912_cd4177af074d.slice/crio-3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf10026aa_640c_4f36_9912_cd4177af074d.slice/crio-conmon-3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.563383 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.723181 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-server-conf\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.724581 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-erlang-cookie\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.724741 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f10026aa-640c-4f36-9912-cd4177af074d-erlang-cookie-secret\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 
crc kubenswrapper[4886]: I1124 09:11:38.725000 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-confd\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.725091 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f10026aa-640c-4f36-9912-cd4177af074d-pod-info\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.725442 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.725673 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-tls\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.726008 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-plugins-conf\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.726085 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.726137 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rln56\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-kube-api-access-rln56\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.726250 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-plugins\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.726274 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-config-data\") pod \"f10026aa-640c-4f36-9912-cd4177af074d\" (UID: \"f10026aa-640c-4f36-9912-cd4177af074d\") " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.727354 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.727903 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.727982 4886 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.728503 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.735195 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10026aa-640c-4f36-9912-cd4177af074d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). 
InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.739287 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.743326 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.746857 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-kube-api-access-rln56" (OuterVolumeSpecName: "kube-api-access-rln56") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "kube-api-access-rln56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.750534 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f10026aa-640c-4f36-9912-cd4177af074d-pod-info" (OuterVolumeSpecName: "pod-info") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.762438 4886 generic.go:334] "Generic (PLEG): container finished" podID="f10026aa-640c-4f36-9912-cd4177af074d" containerID="3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7" exitCode=0 Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.762512 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.762502 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f10026aa-640c-4f36-9912-cd4177af074d","Type":"ContainerDied","Data":"3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7"} Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.762588 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f10026aa-640c-4f36-9912-cd4177af074d","Type":"ContainerDied","Data":"12d895eeaf622caf75f1f4cf667982a72c621c9b6a710373f6c5d692ef5588be"} Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.762618 4886 scope.go:117] "RemoveContainer" containerID="3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.779869 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-config-data" (OuterVolumeSpecName: "config-data") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.794472 4886 scope.go:117] "RemoveContainer" containerID="e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.818911 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-server-conf" (OuterVolumeSpecName: "server-conf") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.830555 4886 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f10026aa-640c-4f36-9912-cd4177af074d-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.830601 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.830640 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.830654 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rln56\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-kube-api-access-rln56\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.830668 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" 
Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.830683 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.830696 4886 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f10026aa-640c-4f36-9912-cd4177af074d-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.830708 4886 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f10026aa-640c-4f36-9912-cd4177af074d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.832473 4886 scope.go:117] "RemoveContainer" containerID="3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7" Nov 24 09:11:38 crc kubenswrapper[4886]: E1124 09:11:38.833361 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7\": container with ID starting with 3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7 not found: ID does not exist" containerID="3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.833393 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7"} err="failed to get container status \"3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7\": rpc error: code = NotFound desc = could not find container \"3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7\": container with ID starting with 
3a43dfdd3bfeead9977c7153e7f0a2e6a287282da1aa52e2dc0d8529402136d7 not found: ID does not exist" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.833419 4886 scope.go:117] "RemoveContainer" containerID="e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d" Nov 24 09:11:38 crc kubenswrapper[4886]: E1124 09:11:38.833704 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d\": container with ID starting with e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d not found: ID does not exist" containerID="e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.833723 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d"} err="failed to get container status \"e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d\": rpc error: code = NotFound desc = could not find container \"e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d\": container with ID starting with e65c615b4f50616bd7bd98b9b0df7821f9a9633ce9315706f4f12417f031030d not found: ID does not exist" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.858369 4886 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.865821 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="510b7a7a-1206-44f7-bd72-a85590e7a1ac" path="/var/lib/kubelet/pods/510b7a7a-1206-44f7-bd72-a85590e7a1ac/volumes" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.892916 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f10026aa-640c-4f36-9912-cd4177af074d" (UID: "f10026aa-640c-4f36-9912-cd4177af074d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.934179 4886 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f10026aa-640c-4f36-9912-cd4177af074d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.934241 4886 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:38 crc kubenswrapper[4886]: I1124 09:11:38.948585 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.150913 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.160965 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.178285 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:11:39 crc kubenswrapper[4886]: E1124 09:11:39.178959 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10026aa-640c-4f36-9912-cd4177af074d" containerName="setup-container" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.178983 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10026aa-640c-4f36-9912-cd4177af074d" containerName="setup-container" Nov 24 09:11:39 crc kubenswrapper[4886]: E1124 09:11:39.178995 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f10026aa-640c-4f36-9912-cd4177af074d" containerName="rabbitmq" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.179003 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10026aa-640c-4f36-9912-cd4177af074d" containerName="rabbitmq" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.179358 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10026aa-640c-4f36-9912-cd4177af074d" containerName="rabbitmq" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.180891 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.183534 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.183679 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dsj2b" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.183816 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.184050 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.184572 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.184802 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.184828 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.193506 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345232 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345297 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345332 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stj7h\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-kube-api-access-stj7h\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345397 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345432 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345449 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345502 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345528 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/533cb212-964b-4427-ac3f-ebafca6d8787-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345591 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.345617 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/533cb212-964b-4427-ac3f-ebafca6d8787-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447230 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447301 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/533cb212-964b-4427-ac3f-ebafca6d8787-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447348 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447379 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: 
I1124 09:11:39.447417 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stj7h\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-kube-api-access-stj7h\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447491 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447534 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447562 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447592 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447615 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.447643 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/533cb212-964b-4427-ac3f-ebafca6d8787-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.449814 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.449937 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.450021 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.450268 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.450951 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/533cb212-964b-4427-ac3f-ebafca6d8787-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.454829 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.455310 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/533cb212-964b-4427-ac3f-ebafca6d8787-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.455828 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.458515 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/533cb212-964b-4427-ac3f-ebafca6d8787-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.461695 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/533cb212-964b-4427-ac3f-ebafca6d8787-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.478194 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stj7h\" (UniqueName: \"kubernetes.io/projected/533cb212-964b-4427-ac3f-ebafca6d8787-kube-api-access-stj7h\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.506641 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"533cb212-964b-4427-ac3f-ebafca6d8787\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.776099 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f14f0ef7-768e-4fc8-a2d1-b852fe44d773","Type":"ContainerStarted","Data":"d9f7cd24b6cf1cc81dee52e7d028af1e1c4c5686a1351babcd403d28248cfd51"} Nov 24 09:11:39 crc kubenswrapper[4886]: I1124 09:11:39.799711 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.329930 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.459534 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-6ww2l"] Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.463083 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.468092 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.489404 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-6ww2l"] Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.576761 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.576960 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xn8n\" (UniqueName: \"kubernetes.io/projected/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-kube-api-access-4xn8n\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.577251 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-config\") pod 
\"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.577284 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.577326 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-svc\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.577346 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.577550 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.680374 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.680838 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xn8n\" (UniqueName: \"kubernetes.io/projected/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-kube-api-access-4xn8n\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.680919 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-config\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.680946 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.680973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-svc\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.680994 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.681064 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.681567 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.681901 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.682276 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.682726 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-svc\") pod 
\"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.682812 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.683055 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-config\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.745030 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xn8n\" (UniqueName: \"kubernetes.io/projected/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-kube-api-access-4xn8n\") pod \"dnsmasq-dns-d558885bc-6ww2l\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.787810 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"533cb212-964b-4427-ac3f-ebafca6d8787","Type":"ContainerStarted","Data":"bc9bdc5cfc4a617c7191b2cdbcdef83ce9e4a405522c99799b154484c43678b0"} Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.863297 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10026aa-640c-4f36-9912-cd4177af074d" path="/var/lib/kubelet/pods/f10026aa-640c-4f36-9912-cd4177af074d/volumes" Nov 24 09:11:40 crc kubenswrapper[4886]: I1124 09:11:40.872846 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:41 crc kubenswrapper[4886]: I1124 09:11:41.424202 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-6ww2l"] Nov 24 09:11:41 crc kubenswrapper[4886]: I1124 09:11:41.801315 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f14f0ef7-768e-4fc8-a2d1-b852fe44d773","Type":"ContainerStarted","Data":"d35aec1463262d5341e2a5a1d48982c868fcb56752ce7b32c0c2067af4c5692e"} Nov 24 09:11:41 crc kubenswrapper[4886]: I1124 09:11:41.806403 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" event={"ID":"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7","Type":"ContainerStarted","Data":"b1bd723f41dddf772f5a916004578caac6e740b97d7a2c497ecb3deeea8e85df"} Nov 24 09:11:42 crc kubenswrapper[4886]: I1124 09:11:42.819200 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"533cb212-964b-4427-ac3f-ebafca6d8787","Type":"ContainerStarted","Data":"1e2a27dab14d71e60f236cc0f2876de451c79881417c3f5923d30acc8105e262"} Nov 24 09:11:42 crc kubenswrapper[4886]: I1124 09:11:42.821979 4886 generic.go:334] "Generic (PLEG): container finished" podID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" containerID="99a2f6cc7aec3093a6cf8357505de43b658d7f589f2b52dc20da6f14ddc2c2be" exitCode=0 Nov 24 09:11:42 crc kubenswrapper[4886]: I1124 09:11:42.822275 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" event={"ID":"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7","Type":"ContainerDied","Data":"99a2f6cc7aec3093a6cf8357505de43b658d7f589f2b52dc20da6f14ddc2c2be"} Nov 24 09:11:43 crc kubenswrapper[4886]: I1124 09:11:43.838669 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" 
event={"ID":"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7","Type":"ContainerStarted","Data":"e7c90bfce80bd72997ac8db268efac80caffba5b5bcd16a25b2e234e99902e3e"} Nov 24 09:11:43 crc kubenswrapper[4886]: I1124 09:11:43.839829 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:43 crc kubenswrapper[4886]: I1124 09:11:43.877978 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" podStartSLOduration=3.877945836 podStartE2EDuration="3.877945836s" podCreationTimestamp="2025-11-24 09:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:43.858243216 +0000 UTC m=+1359.744981361" watchObservedRunningTime="2025-11-24 09:11:43.877945836 +0000 UTC m=+1359.764683971" Nov 24 09:11:50 crc kubenswrapper[4886]: I1124 09:11:50.875383 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:11:50 crc kubenswrapper[4886]: I1124 09:11:50.969058 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-b9stw"] Nov 24 09:11:50 crc kubenswrapper[4886]: I1124 09:11:50.970483 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" podUID="e564c004-2962-49a2-84d2-bf67161bcea5" containerName="dnsmasq-dns" containerID="cri-o://facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0" gracePeriod=10 Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.163963 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-bw54t"] Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.167013 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.180046 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-bw54t"] Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.240313 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.240411 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456pv\" (UniqueName: \"kubernetes.io/projected/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-kube-api-access-456pv\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.240439 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.240485 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-config\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.240519 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.240661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.240768 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.343246 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-456pv\" (UniqueName: \"kubernetes.io/projected/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-kube-api-access-456pv\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.345797 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 
09:11:51.345870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-config\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.346925 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.347017 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-config\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.347506 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.347547 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.347755 4886 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.347893 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.348688 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.349549 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.349769 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.350279 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.391551 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456pv\" (UniqueName: \"kubernetes.io/projected/b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06-kube-api-access-456pv\") pod \"dnsmasq-dns-78c64bc9c5-bw54t\" (UID: \"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06\") " pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.504851 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.656814 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.756945 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-svc\") pod \"e564c004-2962-49a2-84d2-bf67161bcea5\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.757075 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-nb\") pod \"e564c004-2962-49a2-84d2-bf67161bcea5\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.757249 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-config\") pod \"e564c004-2962-49a2-84d2-bf67161bcea5\" (UID: 
\"e564c004-2962-49a2-84d2-bf67161bcea5\") " Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.757320 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfwqp\" (UniqueName: \"kubernetes.io/projected/e564c004-2962-49a2-84d2-bf67161bcea5-kube-api-access-xfwqp\") pod \"e564c004-2962-49a2-84d2-bf67161bcea5\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.757378 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-swift-storage-0\") pod \"e564c004-2962-49a2-84d2-bf67161bcea5\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.757403 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-sb\") pod \"e564c004-2962-49a2-84d2-bf67161bcea5\" (UID: \"e564c004-2962-49a2-84d2-bf67161bcea5\") " Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.770049 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e564c004-2962-49a2-84d2-bf67161bcea5-kube-api-access-xfwqp" (OuterVolumeSpecName: "kube-api-access-xfwqp") pod "e564c004-2962-49a2-84d2-bf67161bcea5" (UID: "e564c004-2962-49a2-84d2-bf67161bcea5"). InnerVolumeSpecName "kube-api-access-xfwqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.860114 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e564c004-2962-49a2-84d2-bf67161bcea5" (UID: "e564c004-2962-49a2-84d2-bf67161bcea5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.860174 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfwqp\" (UniqueName: \"kubernetes.io/projected/e564c004-2962-49a2-84d2-bf67161bcea5-kube-api-access-xfwqp\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.866038 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-config" (OuterVolumeSpecName: "config") pod "e564c004-2962-49a2-84d2-bf67161bcea5" (UID: "e564c004-2962-49a2-84d2-bf67161bcea5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.869918 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e564c004-2962-49a2-84d2-bf67161bcea5" (UID: "e564c004-2962-49a2-84d2-bf67161bcea5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.876162 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e564c004-2962-49a2-84d2-bf67161bcea5" (UID: "e564c004-2962-49a2-84d2-bf67161bcea5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.884772 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e564c004-2962-49a2-84d2-bf67161bcea5" (UID: "e564c004-2962-49a2-84d2-bf67161bcea5"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.962638 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.963292 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.963365 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.963384 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.963400 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e564c004-2962-49a2-84d2-bf67161bcea5-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.975859 4886 generic.go:334] "Generic (PLEG): container finished" podID="e564c004-2962-49a2-84d2-bf67161bcea5" containerID="facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0" exitCode=0 Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.975910 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" event={"ID":"e564c004-2962-49a2-84d2-bf67161bcea5","Type":"ContainerDied","Data":"facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0"} Nov 24 09:11:51 crc 
kubenswrapper[4886]: I1124 09:11:51.975926 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.975942 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" event={"ID":"e564c004-2962-49a2-84d2-bf67161bcea5","Type":"ContainerDied","Data":"0336fb4795107360d4c9afe4c40716bdd22683e9cc6397b664f8c3bf2c69e62b"} Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.975964 4886 scope.go:117] "RemoveContainer" containerID="facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0" Nov 24 09:11:51 crc kubenswrapper[4886]: I1124 09:11:51.992600 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-bw54t"] Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.020727 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-b9stw"] Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.028596 4886 scope.go:117] "RemoveContainer" containerID="08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef" Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.031022 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-b9stw"] Nov 24 09:11:52 crc kubenswrapper[4886]: W1124 09:11:52.036476 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6b3b0d2_a417_4be6_a6b0_8b30f1a25a06.slice/crio-1c4bbbee8984dcd0cd075b7f4c56537f9d885a6add95b08c09995a84348b10cf WatchSource:0}: Error finding container 1c4bbbee8984dcd0cd075b7f4c56537f9d885a6add95b08c09995a84348b10cf: Status 404 returned error can't find the container with id 1c4bbbee8984dcd0cd075b7f4c56537f9d885a6add95b08c09995a84348b10cf Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.063754 4886 scope.go:117] "RemoveContainer" 
containerID="facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0" Nov 24 09:11:52 crc kubenswrapper[4886]: E1124 09:11:52.064335 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0\": container with ID starting with facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0 not found: ID does not exist" containerID="facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0" Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.064368 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0"} err="failed to get container status \"facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0\": rpc error: code = NotFound desc = could not find container \"facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0\": container with ID starting with facf18a5cb2de9714641a9cc922379afba6e64a7a9358fb303cf1c952b52fba0 not found: ID does not exist" Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.064395 4886 scope.go:117] "RemoveContainer" containerID="08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef" Nov 24 09:11:52 crc kubenswrapper[4886]: E1124 09:11:52.064667 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef\": container with ID starting with 08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef not found: ID does not exist" containerID="08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef" Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.064692 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef"} err="failed to get container status \"08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef\": rpc error: code = NotFound desc = could not find container \"08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef\": container with ID starting with 08df42843a65b38061288da35baee011b1000818a6160a75a43fe53c0ee8acef not found: ID does not exist" Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.878555 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e564c004-2962-49a2-84d2-bf67161bcea5" path="/var/lib/kubelet/pods/e564c004-2962-49a2-84d2-bf67161bcea5/volumes" Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.992724 4886 generic.go:334] "Generic (PLEG): container finished" podID="b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06" containerID="85e0c99a779c6f8a4f8a7553a3b405a3fade8940b8d20b25f0e508c480a904b7" exitCode=0 Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.992808 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" event={"ID":"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06","Type":"ContainerDied","Data":"85e0c99a779c6f8a4f8a7553a3b405a3fade8940b8d20b25f0e508c480a904b7"} Nov 24 09:11:52 crc kubenswrapper[4886]: I1124 09:11:52.992898 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" event={"ID":"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06","Type":"ContainerStarted","Data":"1c4bbbee8984dcd0cd075b7f4c56537f9d885a6add95b08c09995a84348b10cf"} Nov 24 09:11:54 crc kubenswrapper[4886]: I1124 09:11:54.006496 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" event={"ID":"b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06","Type":"ContainerStarted","Data":"2cb67cfbd8e568c0f89b0c3ddb2b9ad0096bf2cbb1f8c66d2d7e6943c50da154"} Nov 24 09:11:54 crc kubenswrapper[4886]: I1124 09:11:54.006837 4886 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:11:54 crc kubenswrapper[4886]: I1124 09:11:54.069348 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" podStartSLOduration=3.069322514 podStartE2EDuration="3.069322514s" podCreationTimestamp="2025-11-24 09:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:54.06494927 +0000 UTC m=+1369.951687405" watchObservedRunningTime="2025-11-24 09:11:54.069322514 +0000 UTC m=+1369.956060649" Nov 24 09:11:56 crc kubenswrapper[4886]: I1124 09:11:56.289071 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-b9stw" podUID="e564c004-2962-49a2-84d2-bf67161bcea5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.194:5353: i/o timeout" Nov 24 09:12:01 crc kubenswrapper[4886]: I1124 09:12:01.669664 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-bw54t" Nov 24 09:12:01 crc kubenswrapper[4886]: I1124 09:12:01.773544 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-6ww2l"] Nov 24 09:12:01 crc kubenswrapper[4886]: I1124 09:12:01.780761 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" podUID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" containerName="dnsmasq-dns" containerID="cri-o://e7c90bfce80bd72997ac8db268efac80caffba5b5bcd16a25b2e234e99902e3e" gracePeriod=10 Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.136457 4886 generic.go:334] "Generic (PLEG): container finished" podID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" containerID="e7c90bfce80bd72997ac8db268efac80caffba5b5bcd16a25b2e234e99902e3e" exitCode=0 Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.136532 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" event={"ID":"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7","Type":"ContainerDied","Data":"e7c90bfce80bd72997ac8db268efac80caffba5b5bcd16a25b2e234e99902e3e"} Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.564202 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.700867 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-svc\") pod \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.700956 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-config\") pod \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.701106 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xn8n\" (UniqueName: \"kubernetes.io/projected/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-kube-api-access-4xn8n\") pod \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.701191 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-nb\") pod \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.701255 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-openstack-edpm-ipam\") pod \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.701368 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-swift-storage-0\") pod \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.701450 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-sb\") pod \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\" (UID: \"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7\") " Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.714533 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-kube-api-access-4xn8n" (OuterVolumeSpecName: "kube-api-access-4xn8n") pod "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" (UID: "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7"). InnerVolumeSpecName "kube-api-access-4xn8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.784111 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" (UID: "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.784181 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" (UID: "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.787183 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" (UID: "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.788926 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" (UID: "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.797861 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" (UID: "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.798745 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-config" (OuterVolumeSpecName: "config") pod "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" (UID: "2d6f4659-e108-4485-ba8e-fc3c7c4b92a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.803980 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xn8n\" (UniqueName: \"kubernetes.io/projected/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-kube-api-access-4xn8n\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.804007 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.804019 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.804030 4886 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.804039 4886 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.804051 4886 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:02 crc kubenswrapper[4886]: I1124 09:12:02.804063 4886 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:03 crc kubenswrapper[4886]: I1124 09:12:03.153000 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" event={"ID":"2d6f4659-e108-4485-ba8e-fc3c7c4b92a7","Type":"ContainerDied","Data":"b1bd723f41dddf772f5a916004578caac6e740b97d7a2c497ecb3deeea8e85df"} Nov 24 09:12:03 crc kubenswrapper[4886]: I1124 09:12:03.153046 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-6ww2l" Nov 24 09:12:03 crc kubenswrapper[4886]: I1124 09:12:03.153073 4886 scope.go:117] "RemoveContainer" containerID="e7c90bfce80bd72997ac8db268efac80caffba5b5bcd16a25b2e234e99902e3e" Nov 24 09:12:03 crc kubenswrapper[4886]: I1124 09:12:03.185956 4886 scope.go:117] "RemoveContainer" containerID="99a2f6cc7aec3093a6cf8357505de43b658d7f589f2b52dc20da6f14ddc2c2be" Nov 24 09:12:03 crc kubenswrapper[4886]: I1124 09:12:03.185996 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-6ww2l"] Nov 24 09:12:03 crc kubenswrapper[4886]: I1124 09:12:03.196708 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-6ww2l"] Nov 24 09:12:04 crc kubenswrapper[4886]: I1124 09:12:04.868183 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" path="/var/lib/kubelet/pods/2d6f4659-e108-4485-ba8e-fc3c7c4b92a7/volumes" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.614062 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz"] Nov 24 09:12:10 crc 
kubenswrapper[4886]: E1124 09:12:10.615354 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" containerName="init" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.615372 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" containerName="init" Nov 24 09:12:10 crc kubenswrapper[4886]: E1124 09:12:10.615412 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e564c004-2962-49a2-84d2-bf67161bcea5" containerName="init" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.615420 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e564c004-2962-49a2-84d2-bf67161bcea5" containerName="init" Nov 24 09:12:10 crc kubenswrapper[4886]: E1124 09:12:10.615429 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" containerName="dnsmasq-dns" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.615436 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" containerName="dnsmasq-dns" Nov 24 09:12:10 crc kubenswrapper[4886]: E1124 09:12:10.615451 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e564c004-2962-49a2-84d2-bf67161bcea5" containerName="dnsmasq-dns" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.615457 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e564c004-2962-49a2-84d2-bf67161bcea5" containerName="dnsmasq-dns" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.615664 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6f4659-e108-4485-ba8e-fc3c7c4b92a7" containerName="dnsmasq-dns" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.615677 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e564c004-2962-49a2-84d2-bf67161bcea5" containerName="dnsmasq-dns" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.616905 4886 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.623811 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.623959 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.623964 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.624561 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.626929 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz"] Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.783627 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.783719 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbc6n\" (UniqueName: \"kubernetes.io/projected/b715926a-c856-44c7-b863-95bd080cbe24-kube-api-access-jbc6n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc 
kubenswrapper[4886]: I1124 09:12:10.783764 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.784191 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.887906 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.888003 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbc6n\" (UniqueName: \"kubernetes.io/projected/b715926a-c856-44c7-b863-95bd080cbe24-kube-api-access-jbc6n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.888044 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.888111 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.896270 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.896777 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.897274 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: 
I1124 09:12:10.906332 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbc6n\" (UniqueName: \"kubernetes.io/projected/b715926a-c856-44c7-b863-95bd080cbe24-kube-api-access-jbc6n\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:10 crc kubenswrapper[4886]: I1124 09:12:10.995483 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:11 crc kubenswrapper[4886]: I1124 09:12:11.693268 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz"] Nov 24 09:12:12 crc kubenswrapper[4886]: I1124 09:12:12.264830 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" event={"ID":"b715926a-c856-44c7-b863-95bd080cbe24","Type":"ContainerStarted","Data":"1f5e64dc90358d3d59e833fa52e3da2615b8698f8c8d15b76c7ebbd0d26a125f"} Nov 24 09:12:14 crc kubenswrapper[4886]: I1124 09:12:14.292889 4886 generic.go:334] "Generic (PLEG): container finished" podID="f14f0ef7-768e-4fc8-a2d1-b852fe44d773" containerID="d35aec1463262d5341e2a5a1d48982c868fcb56752ce7b32c0c2067af4c5692e" exitCode=0 Nov 24 09:12:14 crc kubenswrapper[4886]: I1124 09:12:14.293071 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f14f0ef7-768e-4fc8-a2d1-b852fe44d773","Type":"ContainerDied","Data":"d35aec1463262d5341e2a5a1d48982c868fcb56752ce7b32c0c2067af4c5692e"} Nov 24 09:12:15 crc kubenswrapper[4886]: I1124 09:12:15.309843 4886 generic.go:334] "Generic (PLEG): container finished" podID="533cb212-964b-4427-ac3f-ebafca6d8787" containerID="1e2a27dab14d71e60f236cc0f2876de451c79881417c3f5923d30acc8105e262" exitCode=0 Nov 24 09:12:15 crc kubenswrapper[4886]: I1124 
09:12:15.309931 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"533cb212-964b-4427-ac3f-ebafca6d8787","Type":"ContainerDied","Data":"1e2a27dab14d71e60f236cc0f2876de451c79881417c3f5923d30acc8105e262"} Nov 24 09:12:15 crc kubenswrapper[4886]: I1124 09:12:15.314300 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f14f0ef7-768e-4fc8-a2d1-b852fe44d773","Type":"ContainerStarted","Data":"d291f6ac95fd81b743e167164bb15fa8682b08ecd10dc8556aea4092d5212e51"} Nov 24 09:12:15 crc kubenswrapper[4886]: I1124 09:12:15.314542 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 09:12:15 crc kubenswrapper[4886]: I1124 09:12:15.380417 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.380387025 podStartE2EDuration="38.380387025s" podCreationTimestamp="2025-11-24 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:12:15.372297916 +0000 UTC m=+1391.259036071" watchObservedRunningTime="2025-11-24 09:12:15.380387025 +0000 UTC m=+1391.267125160" Nov 24 09:12:22 crc kubenswrapper[4886]: I1124 09:12:22.403254 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" event={"ID":"b715926a-c856-44c7-b863-95bd080cbe24","Type":"ContainerStarted","Data":"c9168496d67ee45193ab046a88999917007df88639bff7c80298f3a56c73606f"} Nov 24 09:12:22 crc kubenswrapper[4886]: I1124 09:12:22.405673 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"533cb212-964b-4427-ac3f-ebafca6d8787","Type":"ContainerStarted","Data":"bf14c59906f79b440d362c672c9b3ba2a76402a0a350656d2f74e2d86cef3ba8"} Nov 24 09:12:22 crc kubenswrapper[4886]: I1124 
09:12:22.406138 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:12:22 crc kubenswrapper[4886]: I1124 09:12:22.428963 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" podStartSLOduration=2.042537708 podStartE2EDuration="12.428941381s" podCreationTimestamp="2025-11-24 09:12:10 +0000 UTC" firstStartedPulling="2025-11-24 09:12:11.702018503 +0000 UTC m=+1387.588756638" lastFinishedPulling="2025-11-24 09:12:22.088422176 +0000 UTC m=+1397.975160311" observedRunningTime="2025-11-24 09:12:22.421987704 +0000 UTC m=+1398.308725849" watchObservedRunningTime="2025-11-24 09:12:22.428941381 +0000 UTC m=+1398.315679516" Nov 24 09:12:22 crc kubenswrapper[4886]: I1124 09:12:22.458962 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.458939762 podStartE2EDuration="43.458939762s" podCreationTimestamp="2025-11-24 09:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:12:22.449037991 +0000 UTC m=+1398.335776126" watchObservedRunningTime="2025-11-24 09:12:22.458939762 +0000 UTC m=+1398.345677897" Nov 24 09:12:28 crc kubenswrapper[4886]: I1124 09:12:28.182448 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.068782 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6h9jh"] Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.073718 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.091176 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6h9jh"] Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.144460 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-utilities\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.144603 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7q9\" (UniqueName: \"kubernetes.io/projected/41c2cb5d-2907-4a91-8415-d9da08ea4687-kube-api-access-2t7q9\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.144876 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-catalog-content\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.247728 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-catalog-content\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.247896 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-utilities\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.247934 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7q9\" (UniqueName: \"kubernetes.io/projected/41c2cb5d-2907-4a91-8415-d9da08ea4687-kube-api-access-2t7q9\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.248692 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-catalog-content\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.248880 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-utilities\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.286568 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7q9\" (UniqueName: \"kubernetes.io/projected/41c2cb5d-2907-4a91-8415-d9da08ea4687-kube-api-access-2t7q9\") pod \"community-operators-6h9jh\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:29 crc kubenswrapper[4886]: I1124 09:12:29.405604 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:30 crc kubenswrapper[4886]: I1124 09:12:30.061479 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6h9jh"] Nov 24 09:12:30 crc kubenswrapper[4886]: I1124 09:12:30.491087 4886 generic.go:334] "Generic (PLEG): container finished" podID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerID="faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f" exitCode=0 Nov 24 09:12:30 crc kubenswrapper[4886]: I1124 09:12:30.491211 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9jh" event={"ID":"41c2cb5d-2907-4a91-8415-d9da08ea4687","Type":"ContainerDied","Data":"faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f"} Nov 24 09:12:30 crc kubenswrapper[4886]: I1124 09:12:30.491866 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9jh" event={"ID":"41c2cb5d-2907-4a91-8415-d9da08ea4687","Type":"ContainerStarted","Data":"33bda0de7294af53024594892ee8ede6a390a81f6d2a282759d11fbd31df8f2f"} Nov 24 09:12:31 crc kubenswrapper[4886]: I1124 09:12:31.522977 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9jh" event={"ID":"41c2cb5d-2907-4a91-8415-d9da08ea4687","Type":"ContainerStarted","Data":"25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db"} Nov 24 09:12:31 crc kubenswrapper[4886]: I1124 09:12:31.784507 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:12:31 crc kubenswrapper[4886]: I1124 09:12:31.784617 4886 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:12:32 crc kubenswrapper[4886]: I1124 09:12:32.536036 4886 generic.go:334] "Generic (PLEG): container finished" podID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerID="25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db" exitCode=0 Nov 24 09:12:32 crc kubenswrapper[4886]: I1124 09:12:32.536141 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9jh" event={"ID":"41c2cb5d-2907-4a91-8415-d9da08ea4687","Type":"ContainerDied","Data":"25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db"} Nov 24 09:12:34 crc kubenswrapper[4886]: I1124 09:12:34.567607 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9jh" event={"ID":"41c2cb5d-2907-4a91-8415-d9da08ea4687","Type":"ContainerStarted","Data":"8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642"} Nov 24 09:12:34 crc kubenswrapper[4886]: I1124 09:12:34.591361 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6h9jh" podStartSLOduration=2.104023142 podStartE2EDuration="5.591333491s" podCreationTimestamp="2025-11-24 09:12:29 +0000 UTC" firstStartedPulling="2025-11-24 09:12:30.494372288 +0000 UTC m=+1406.381110423" lastFinishedPulling="2025-11-24 09:12:33.981682637 +0000 UTC m=+1409.868420772" observedRunningTime="2025-11-24 09:12:34.588821969 +0000 UTC m=+1410.475560124" watchObservedRunningTime="2025-11-24 09:12:34.591333491 +0000 UTC m=+1410.478071626" Nov 24 09:12:39 crc kubenswrapper[4886]: I1124 09:12:39.406897 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:39 crc kubenswrapper[4886]: I1124 09:12:39.407779 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:39 crc kubenswrapper[4886]: I1124 09:12:39.464045 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:39 crc kubenswrapper[4886]: I1124 09:12:39.669328 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:39 crc kubenswrapper[4886]: I1124 09:12:39.743452 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6h9jh"] Nov 24 09:12:39 crc kubenswrapper[4886]: I1124 09:12:39.806489 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:12:40 crc kubenswrapper[4886]: I1124 09:12:40.625780 4886 generic.go:334] "Generic (PLEG): container finished" podID="b715926a-c856-44c7-b863-95bd080cbe24" containerID="c9168496d67ee45193ab046a88999917007df88639bff7c80298f3a56c73606f" exitCode=0 Nov 24 09:12:40 crc kubenswrapper[4886]: I1124 09:12:40.625877 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" event={"ID":"b715926a-c856-44c7-b863-95bd080cbe24","Type":"ContainerDied","Data":"c9168496d67ee45193ab046a88999917007df88639bff7c80298f3a56c73606f"} Nov 24 09:12:41 crc kubenswrapper[4886]: I1124 09:12:41.635608 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6h9jh" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerName="registry-server" containerID="cri-o://8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642" gracePeriod=2 Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.124370 
4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zrrfv"] Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.127740 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.135593 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrrfv"] Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.174735 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tdj\" (UniqueName: \"kubernetes.io/projected/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-kube-api-access-48tdj\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.174818 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-catalog-content\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.174865 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-utilities\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.196396 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.209069 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.277300 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-ssh-key\") pod \"b715926a-c856-44c7-b863-95bd080cbe24\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.277414 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-repo-setup-combined-ca-bundle\") pod \"b715926a-c856-44c7-b863-95bd080cbe24\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.277462 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-catalog-content\") pod \"41c2cb5d-2907-4a91-8415-d9da08ea4687\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.277517 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbc6n\" (UniqueName: \"kubernetes.io/projected/b715926a-c856-44c7-b863-95bd080cbe24-kube-api-access-jbc6n\") pod \"b715926a-c856-44c7-b863-95bd080cbe24\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.277562 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-inventory\") pod 
\"b715926a-c856-44c7-b863-95bd080cbe24\" (UID: \"b715926a-c856-44c7-b863-95bd080cbe24\") " Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.277703 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-catalog-content\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.277826 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-utilities\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.278055 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48tdj\" (UniqueName: \"kubernetes.io/projected/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-kube-api-access-48tdj\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.279969 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-catalog-content\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.280705 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-utilities\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " 
pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.304203 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b715926a-c856-44c7-b863-95bd080cbe24" (UID: "b715926a-c856-44c7-b863-95bd080cbe24"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.307401 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b715926a-c856-44c7-b863-95bd080cbe24-kube-api-access-jbc6n" (OuterVolumeSpecName: "kube-api-access-jbc6n") pod "b715926a-c856-44c7-b863-95bd080cbe24" (UID: "b715926a-c856-44c7-b863-95bd080cbe24"). InnerVolumeSpecName "kube-api-access-jbc6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.309115 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48tdj\" (UniqueName: \"kubernetes.io/projected/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-kube-api-access-48tdj\") pod \"redhat-marketplace-zrrfv\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.323141 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-inventory" (OuterVolumeSpecName: "inventory") pod "b715926a-c856-44c7-b863-95bd080cbe24" (UID: "b715926a-c856-44c7-b863-95bd080cbe24"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.327392 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b715926a-c856-44c7-b863-95bd080cbe24" (UID: "b715926a-c856-44c7-b863-95bd080cbe24"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.366020 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41c2cb5d-2907-4a91-8415-d9da08ea4687" (UID: "41c2cb5d-2907-4a91-8415-d9da08ea4687"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.379250 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t7q9\" (UniqueName: \"kubernetes.io/projected/41c2cb5d-2907-4a91-8415-d9da08ea4687-kube-api-access-2t7q9\") pod \"41c2cb5d-2907-4a91-8415-d9da08ea4687\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.379338 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-utilities\") pod \"41c2cb5d-2907-4a91-8415-d9da08ea4687\" (UID: \"41c2cb5d-2907-4a91-8415-d9da08ea4687\") " Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.379632 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.379652 4886 reconciler_common.go:293] "Volume detached for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.379663 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.379676 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbc6n\" (UniqueName: \"kubernetes.io/projected/b715926a-c856-44c7-b863-95bd080cbe24-kube-api-access-jbc6n\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.379687 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b715926a-c856-44c7-b863-95bd080cbe24-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.380544 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-utilities" (OuterVolumeSpecName: "utilities") pod "41c2cb5d-2907-4a91-8415-d9da08ea4687" (UID: "41c2cb5d-2907-4a91-8415-d9da08ea4687"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.385540 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c2cb5d-2907-4a91-8415-d9da08ea4687-kube-api-access-2t7q9" (OuterVolumeSpecName: "kube-api-access-2t7q9") pod "41c2cb5d-2907-4a91-8415-d9da08ea4687" (UID: "41c2cb5d-2907-4a91-8415-d9da08ea4687"). InnerVolumeSpecName "kube-api-access-2t7q9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.482197 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41c2cb5d-2907-4a91-8415-d9da08ea4687-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.482257 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t7q9\" (UniqueName: \"kubernetes.io/projected/41c2cb5d-2907-4a91-8415-d9da08ea4687-kube-api-access-2t7q9\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.527328 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.663977 4886 generic.go:334] "Generic (PLEG): container finished" podID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerID="8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642" exitCode=0 Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.664051 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9jh" event={"ID":"41c2cb5d-2907-4a91-8415-d9da08ea4687","Type":"ContainerDied","Data":"8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642"} Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.664092 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6h9jh" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.664670 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9jh" event={"ID":"41c2cb5d-2907-4a91-8415-d9da08ea4687","Type":"ContainerDied","Data":"33bda0de7294af53024594892ee8ede6a390a81f6d2a282759d11fbd31df8f2f"} Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.664737 4886 scope.go:117] "RemoveContainer" containerID="8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.714925 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" event={"ID":"b715926a-c856-44c7-b863-95bd080cbe24","Type":"ContainerDied","Data":"1f5e64dc90358d3d59e833fa52e3da2615b8698f8c8d15b76c7ebbd0d26a125f"} Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.715085 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f5e64dc90358d3d59e833fa52e3da2615b8698f8c8d15b76c7ebbd0d26a125f" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.715309 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.770757 4886 scope.go:117] "RemoveContainer" containerID="25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.773401 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6h9jh"] Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.844061 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6h9jh"] Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.850453 4886 scope.go:117] "RemoveContainer" containerID="faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.872975 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" path="/var/lib/kubelet/pods/41c2cb5d-2907-4a91-8415-d9da08ea4687/volumes" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.874039 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7"] Nov 24 09:12:42 crc kubenswrapper[4886]: E1124 09:12:42.874726 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerName="extract-content" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.874766 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerName="extract-content" Nov 24 09:12:42 crc kubenswrapper[4886]: E1124 09:12:42.874787 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b715926a-c856-44c7-b863-95bd080cbe24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.874793 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b715926a-c856-44c7-b863-95bd080cbe24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:12:42 crc kubenswrapper[4886]: E1124 09:12:42.874843 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerName="registry-server" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.874850 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerName="registry-server" Nov 24 09:12:42 crc kubenswrapper[4886]: E1124 09:12:42.874868 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerName="extract-utilities" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.874874 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerName="extract-utilities" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.875122 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="b715926a-c856-44c7-b863-95bd080cbe24" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.875139 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c2cb5d-2907-4a91-8415-d9da08ea4687" containerName="registry-server" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.876083 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.879916 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.880173 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.880432 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.881162 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.887423 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7"] Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.901064 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrrfv"] Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.912527 4886 scope.go:117] "RemoveContainer" containerID="8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642" Nov 24 09:12:42 crc kubenswrapper[4886]: E1124 09:12:42.913785 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642\": container with ID starting with 8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642 not found: ID does not exist" containerID="8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.913851 4886 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642"} err="failed to get container status \"8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642\": rpc error: code = NotFound desc = could not find container \"8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642\": container with ID starting with 8d02e1c0ac40aa5a7d8fa698efc7e8c6decc271b12b184edb76b7cd9ee339642 not found: ID does not exist" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.913874 4886 scope.go:117] "RemoveContainer" containerID="25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db" Nov 24 09:12:42 crc kubenswrapper[4886]: E1124 09:12:42.914520 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db\": container with ID starting with 25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db not found: ID does not exist" containerID="25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.914546 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db"} err="failed to get container status \"25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db\": rpc error: code = NotFound desc = could not find container \"25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db\": container with ID starting with 25a9e60a43b7d30cca0ca6b02999e463b77a90793dc3ea8cd974b38589f391db not found: ID does not exist" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.914588 4886 scope.go:117] "RemoveContainer" containerID="faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f" Nov 24 09:12:42 crc kubenswrapper[4886]: E1124 09:12:42.917392 4886 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f\": container with ID starting with faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f not found: ID does not exist" containerID="faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.917434 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f"} err="failed to get container status \"faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f\": rpc error: code = NotFound desc = could not find container \"faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f\": container with ID starting with faee8fd6e8a13d1c5fdc25a6969695f6e72fe3b2d7890288acbf5cd7c172ca0f not found: ID does not exist" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.953013 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwpm\" (UniqueName: \"kubernetes.io/projected/21022c6d-8637-4952-b0c1-33b80b316a3a-kube-api-access-8fwpm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.953095 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:42 crc kubenswrapper[4886]: I1124 09:12:42.953347 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.055547 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwpm\" (UniqueName: \"kubernetes.io/projected/21022c6d-8637-4952-b0c1-33b80b316a3a-kube-api-access-8fwpm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.055932 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.056357 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.063861 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:43 crc kubenswrapper[4886]: 
I1124 09:12:43.064720 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.080078 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwpm\" (UniqueName: \"kubernetes.io/projected/21022c6d-8637-4952-b0c1-33b80b316a3a-kube-api-access-8fwpm\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-g9sf7\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.282461 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.727674 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerID="395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a" exitCode=0 Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.727813 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrrfv" event={"ID":"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7","Type":"ContainerDied","Data":"395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a"} Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.728241 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrrfv" event={"ID":"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7","Type":"ContainerStarted","Data":"71faa966fe5eb91ec42041ec054fd014997425fca0b980ca2921a965aad8b5b1"} Nov 24 09:12:43 crc kubenswrapper[4886]: I1124 09:12:43.827402 4886 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7"] Nov 24 09:12:44 crc kubenswrapper[4886]: I1124 09:12:44.740853 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" event={"ID":"21022c6d-8637-4952-b0c1-33b80b316a3a","Type":"ContainerStarted","Data":"9b97941dbb9f1e9eaf3d921d4580784d1eed833385a57c8646adaec384a99895"} Nov 24 09:12:44 crc kubenswrapper[4886]: I1124 09:12:44.741363 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" event={"ID":"21022c6d-8637-4952-b0c1-33b80b316a3a","Type":"ContainerStarted","Data":"1f87bb85d539db527fe7617bc201e634c60a91c436c2a55c76f736db1cb2d3ec"} Nov 24 09:12:44 crc kubenswrapper[4886]: I1124 09:12:44.764977 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" podStartSLOduration=2.332939632 podStartE2EDuration="2.764947773s" podCreationTimestamp="2025-11-24 09:12:42 +0000 UTC" firstStartedPulling="2025-11-24 09:12:43.837827659 +0000 UTC m=+1419.724565794" lastFinishedPulling="2025-11-24 09:12:44.2698358 +0000 UTC m=+1420.156573935" observedRunningTime="2025-11-24 09:12:44.762869894 +0000 UTC m=+1420.649608049" watchObservedRunningTime="2025-11-24 09:12:44.764947773 +0000 UTC m=+1420.651685908" Nov 24 09:12:45 crc kubenswrapper[4886]: I1124 09:12:45.756101 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerID="1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff" exitCode=0 Nov 24 09:12:45 crc kubenswrapper[4886]: I1124 09:12:45.756183 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrrfv" event={"ID":"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7","Type":"ContainerDied","Data":"1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff"} Nov 24 09:12:47 crc 
kubenswrapper[4886]: I1124 09:12:46.773436 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrrfv" event={"ID":"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7","Type":"ContainerStarted","Data":"f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80"} Nov 24 09:12:47 crc kubenswrapper[4886]: I1124 09:12:46.798985 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zrrfv" podStartSLOduration=2.368044667 podStartE2EDuration="4.798959833s" podCreationTimestamp="2025-11-24 09:12:42 +0000 UTC" firstStartedPulling="2025-11-24 09:12:43.730559244 +0000 UTC m=+1419.617297379" lastFinishedPulling="2025-11-24 09:12:46.16147441 +0000 UTC m=+1422.048212545" observedRunningTime="2025-11-24 09:12:46.794726373 +0000 UTC m=+1422.681464508" watchObservedRunningTime="2025-11-24 09:12:46.798959833 +0000 UTC m=+1422.685697968" Nov 24 09:12:47 crc kubenswrapper[4886]: I1124 09:12:47.784438 4886 generic.go:334] "Generic (PLEG): container finished" podID="21022c6d-8637-4952-b0c1-33b80b316a3a" containerID="9b97941dbb9f1e9eaf3d921d4580784d1eed833385a57c8646adaec384a99895" exitCode=0 Nov 24 09:12:47 crc kubenswrapper[4886]: I1124 09:12:47.784530 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" event={"ID":"21022c6d-8637-4952-b0c1-33b80b316a3a","Type":"ContainerDied","Data":"9b97941dbb9f1e9eaf3d921d4580784d1eed833385a57c8646adaec384a99895"} Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.288571 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.399022 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwpm\" (UniqueName: \"kubernetes.io/projected/21022c6d-8637-4952-b0c1-33b80b316a3a-kube-api-access-8fwpm\") pod \"21022c6d-8637-4952-b0c1-33b80b316a3a\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.399085 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-inventory\") pod \"21022c6d-8637-4952-b0c1-33b80b316a3a\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.399176 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-ssh-key\") pod \"21022c6d-8637-4952-b0c1-33b80b316a3a\" (UID: \"21022c6d-8637-4952-b0c1-33b80b316a3a\") " Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.406046 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21022c6d-8637-4952-b0c1-33b80b316a3a-kube-api-access-8fwpm" (OuterVolumeSpecName: "kube-api-access-8fwpm") pod "21022c6d-8637-4952-b0c1-33b80b316a3a" (UID: "21022c6d-8637-4952-b0c1-33b80b316a3a"). InnerVolumeSpecName "kube-api-access-8fwpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.436932 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "21022c6d-8637-4952-b0c1-33b80b316a3a" (UID: "21022c6d-8637-4952-b0c1-33b80b316a3a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.443514 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-inventory" (OuterVolumeSpecName: "inventory") pod "21022c6d-8637-4952-b0c1-33b80b316a3a" (UID: "21022c6d-8637-4952-b0c1-33b80b316a3a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.502436 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fwpm\" (UniqueName: \"kubernetes.io/projected/21022c6d-8637-4952-b0c1-33b80b316a3a-kube-api-access-8fwpm\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.502485 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.502495 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21022c6d-8637-4952-b0c1-33b80b316a3a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.810632 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" event={"ID":"21022c6d-8637-4952-b0c1-33b80b316a3a","Type":"ContainerDied","Data":"1f87bb85d539db527fe7617bc201e634c60a91c436c2a55c76f736db1cb2d3ec"} Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.810753 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f87bb85d539db527fe7617bc201e634c60a91c436c2a55c76f736db1cb2d3ec" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.811091 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-g9sf7" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.906764 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c"] Nov 24 09:12:49 crc kubenswrapper[4886]: E1124 09:12:49.907359 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21022c6d-8637-4952-b0c1-33b80b316a3a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.907386 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="21022c6d-8637-4952-b0c1-33b80b316a3a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.907649 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="21022c6d-8637-4952-b0c1-33b80b316a3a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.908496 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.911957 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.913326 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.915598 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.915825 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:12:49 crc kubenswrapper[4886]: I1124 09:12:49.936347 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c"] Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.015273 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.015408 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.015473 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68p8\" (UniqueName: \"kubernetes.io/projected/e26edc4e-16ec-494e-9011-1dcaf51099be-kube-api-access-x68p8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.015506 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.117507 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.117628 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.117678 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68p8\" (UniqueName: \"kubernetes.io/projected/e26edc4e-16ec-494e-9011-1dcaf51099be-kube-api-access-x68p8\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.117718 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.122941 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.123018 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.136760 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.138274 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-x68p8\" (UniqueName: \"kubernetes.io/projected/e26edc4e-16ec-494e-9011-1dcaf51099be-kube-api-access-x68p8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k624c\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.254016 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.788424 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c"] Nov 24 09:12:50 crc kubenswrapper[4886]: I1124 09:12:50.823560 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" event={"ID":"e26edc4e-16ec-494e-9011-1dcaf51099be","Type":"ContainerStarted","Data":"eeb87cdaf6cf1a7baca088af6dca5202b276906d2803b12c6ca9a36c7cc6459b"} Nov 24 09:12:51 crc kubenswrapper[4886]: I1124 09:12:51.837419 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" event={"ID":"e26edc4e-16ec-494e-9011-1dcaf51099be","Type":"ContainerStarted","Data":"67dc99dd3042d384688ecd20b5a6c002661b08a24233c5e1162dac3fcd280bf7"} Nov 24 09:12:52 crc kubenswrapper[4886]: I1124 09:12:52.527721 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:52 crc kubenswrapper[4886]: I1124 09:12:52.527804 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:52 crc kubenswrapper[4886]: I1124 09:12:52.589037 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:52 crc kubenswrapper[4886]: I1124 
09:12:52.614914 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" podStartSLOduration=3.094182555 podStartE2EDuration="3.614891754s" podCreationTimestamp="2025-11-24 09:12:49 +0000 UTC" firstStartedPulling="2025-11-24 09:12:50.79004778 +0000 UTC m=+1426.676785915" lastFinishedPulling="2025-11-24 09:12:51.310756979 +0000 UTC m=+1427.197495114" observedRunningTime="2025-11-24 09:12:51.860442491 +0000 UTC m=+1427.747180626" watchObservedRunningTime="2025-11-24 09:12:52.614891754 +0000 UTC m=+1428.501629889" Nov 24 09:12:52 crc kubenswrapper[4886]: I1124 09:12:52.909090 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:52 crc kubenswrapper[4886]: I1124 09:12:52.967097 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrrfv"] Nov 24 09:12:54 crc kubenswrapper[4886]: I1124 09:12:54.875889 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zrrfv" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerName="registry-server" containerID="cri-o://f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80" gracePeriod=2 Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.365018 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.450244 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48tdj\" (UniqueName: \"kubernetes.io/projected/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-kube-api-access-48tdj\") pod \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.450580 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-catalog-content\") pod \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.450635 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-utilities\") pod \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\" (UID: \"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7\") " Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.451900 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-utilities" (OuterVolumeSpecName: "utilities") pod "d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" (UID: "d1b3051c-49c1-4a3d-b767-7f7d0c2780d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.464468 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-kube-api-access-48tdj" (OuterVolumeSpecName: "kube-api-access-48tdj") pod "d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" (UID: "d1b3051c-49c1-4a3d-b767-7f7d0c2780d7"). InnerVolumeSpecName "kube-api-access-48tdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.480801 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" (UID: "d1b3051c-49c1-4a3d-b767-7f7d0c2780d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.553562 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.553631 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.553649 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48tdj\" (UniqueName: \"kubernetes.io/projected/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7-kube-api-access-48tdj\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.893486 4886 generic.go:334] "Generic (PLEG): container finished" podID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerID="f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80" exitCode=0 Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.893557 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrrfv" event={"ID":"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7","Type":"ContainerDied","Data":"f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80"} Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.893625 4886 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrrfv" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.893699 4886 scope.go:117] "RemoveContainer" containerID="f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.893703 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrrfv" event={"ID":"d1b3051c-49c1-4a3d-b767-7f7d0c2780d7","Type":"ContainerDied","Data":"71faa966fe5eb91ec42041ec054fd014997425fca0b980ca2921a965aad8b5b1"} Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.919333 4886 scope.go:117] "RemoveContainer" containerID="1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff" Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.942780 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrrfv"] Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.961801 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrrfv"] Nov 24 09:12:55 crc kubenswrapper[4886]: I1124 09:12:55.978631 4886 scope.go:117] "RemoveContainer" containerID="395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a" Nov 24 09:12:56 crc kubenswrapper[4886]: I1124 09:12:56.010337 4886 scope.go:117] "RemoveContainer" containerID="f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80" Nov 24 09:12:56 crc kubenswrapper[4886]: E1124 09:12:56.011058 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80\": container with ID starting with f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80 not found: ID does not exist" containerID="f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80" Nov 24 09:12:56 crc kubenswrapper[4886]: I1124 09:12:56.011128 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80"} err="failed to get container status \"f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80\": rpc error: code = NotFound desc = could not find container \"f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80\": container with ID starting with f75941095fa6fa6a27b934e3dea617047d0114f62c990d9ff98977ea75bfeb80 not found: ID does not exist" Nov 24 09:12:56 crc kubenswrapper[4886]: I1124 09:12:56.011187 4886 scope.go:117] "RemoveContainer" containerID="1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff" Nov 24 09:12:56 crc kubenswrapper[4886]: E1124 09:12:56.011633 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff\": container with ID starting with 1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff not found: ID does not exist" containerID="1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff" Nov 24 09:12:56 crc kubenswrapper[4886]: I1124 09:12:56.011680 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff"} err="failed to get container status \"1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff\": rpc error: code = NotFound desc = could not find container \"1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff\": container with ID starting with 1d8222ef08bfcd7a9b00df5349dab5815b913c9812f58be84b3391d8df7a40ff not found: ID does not exist" Nov 24 09:12:56 crc kubenswrapper[4886]: I1124 09:12:56.011713 4886 scope.go:117] "RemoveContainer" containerID="395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a" Nov 24 09:12:56 crc kubenswrapper[4886]: E1124 
09:12:56.012346 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a\": container with ID starting with 395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a not found: ID does not exist" containerID="395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a" Nov 24 09:12:56 crc kubenswrapper[4886]: I1124 09:12:56.012383 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a"} err="failed to get container status \"395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a\": rpc error: code = NotFound desc = could not find container \"395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a\": container with ID starting with 395055cc76e3663de4733045ead2b7c3bec910aef3f758c1cc9ba5c62ea9204a not found: ID does not exist" Nov 24 09:12:56 crc kubenswrapper[4886]: I1124 09:12:56.862843 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" path="/var/lib/kubelet/pods/d1b3051c-49c1-4a3d-b767-7f7d0c2780d7/volumes" Nov 24 09:13:01 crc kubenswrapper[4886]: I1124 09:13:01.785349 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:13:01 crc kubenswrapper[4886]: I1124 09:13:01.786536 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.701183 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ggbf5"] Nov 24 09:13:17 crc kubenswrapper[4886]: E1124 09:13:17.702878 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerName="extract-content" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.702898 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerName="extract-content" Nov 24 09:13:17 crc kubenswrapper[4886]: E1124 09:13:17.702923 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerName="extract-utilities" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.702934 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerName="extract-utilities" Nov 24 09:13:17 crc kubenswrapper[4886]: E1124 09:13:17.702952 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerName="registry-server" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.702959 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerName="registry-server" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.703238 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b3051c-49c1-4a3d-b767-7f7d0c2780d7" containerName="registry-server" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.705542 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.721380 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggbf5"] Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.795880 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-catalog-content\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.796566 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-utilities\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.796898 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtdtc\" (UniqueName: \"kubernetes.io/projected/2018aa66-fca3-457f-84bf-048b30f88dbf-kube-api-access-qtdtc\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.899387 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-utilities\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.899559 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qtdtc\" (UniqueName: \"kubernetes.io/projected/2018aa66-fca3-457f-84bf-048b30f88dbf-kube-api-access-qtdtc\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.899654 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-catalog-content\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.900028 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-utilities\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.900307 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-catalog-content\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:17 crc kubenswrapper[4886]: I1124 09:13:17.924971 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtdtc\" (UniqueName: \"kubernetes.io/projected/2018aa66-fca3-457f-84bf-048b30f88dbf-kube-api-access-qtdtc\") pod \"redhat-operators-ggbf5\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:18 crc kubenswrapper[4886]: I1124 09:13:18.027342 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:18 crc kubenswrapper[4886]: I1124 09:13:18.562042 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggbf5"] Nov 24 09:13:19 crc kubenswrapper[4886]: I1124 09:13:19.165513 4886 generic.go:334] "Generic (PLEG): container finished" podID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerID="61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d" exitCode=0 Nov 24 09:13:19 crc kubenswrapper[4886]: I1124 09:13:19.165585 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggbf5" event={"ID":"2018aa66-fca3-457f-84bf-048b30f88dbf","Type":"ContainerDied","Data":"61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d"} Nov 24 09:13:19 crc kubenswrapper[4886]: I1124 09:13:19.167112 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggbf5" event={"ID":"2018aa66-fca3-457f-84bf-048b30f88dbf","Type":"ContainerStarted","Data":"409cd97819cac41bc8b66b1f0d35093a070db3bea5774d162e97da63839a67fb"} Nov 24 09:13:22 crc kubenswrapper[4886]: I1124 09:13:22.060582 4886 scope.go:117] "RemoveContainer" containerID="d68a5d6237822d87f912b831c39028de71f2381796c7c8d731910a94191d89ef" Nov 24 09:13:22 crc kubenswrapper[4886]: I1124 09:13:22.097550 4886 scope.go:117] "RemoveContainer" containerID="2a57bdaff3d59792302a81061c7531f811a5a9fae61be7ced5e8a140e8d01508" Nov 24 09:13:22 crc kubenswrapper[4886]: I1124 09:13:22.164565 4886 scope.go:117] "RemoveContainer" containerID="076b0adb589615dd8664bc77955de8b132285fa883ef6ce2a66466567eee6be6" Nov 24 09:13:22 crc kubenswrapper[4886]: I1124 09:13:22.207317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggbf5" event={"ID":"2018aa66-fca3-457f-84bf-048b30f88dbf","Type":"ContainerStarted","Data":"26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8"} 
Nov 24 09:13:23 crc kubenswrapper[4886]: I1124 09:13:23.217971 4886 generic.go:334] "Generic (PLEG): container finished" podID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerID="26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8" exitCode=0 Nov 24 09:13:23 crc kubenswrapper[4886]: I1124 09:13:23.218038 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggbf5" event={"ID":"2018aa66-fca3-457f-84bf-048b30f88dbf","Type":"ContainerDied","Data":"26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8"} Nov 24 09:13:25 crc kubenswrapper[4886]: I1124 09:13:25.243620 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggbf5" event={"ID":"2018aa66-fca3-457f-84bf-048b30f88dbf","Type":"ContainerStarted","Data":"6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e"} Nov 24 09:13:25 crc kubenswrapper[4886]: I1124 09:13:25.265713 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ggbf5" podStartSLOduration=3.386734889 podStartE2EDuration="8.265689317s" podCreationTimestamp="2025-11-24 09:13:17 +0000 UTC" firstStartedPulling="2025-11-24 09:13:19.168006028 +0000 UTC m=+1455.054744153" lastFinishedPulling="2025-11-24 09:13:24.046960446 +0000 UTC m=+1459.933698581" observedRunningTime="2025-11-24 09:13:25.259584354 +0000 UTC m=+1461.146322499" watchObservedRunningTime="2025-11-24 09:13:25.265689317 +0000 UTC m=+1461.152427452" Nov 24 09:13:28 crc kubenswrapper[4886]: I1124 09:13:28.028483 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:28 crc kubenswrapper[4886]: I1124 09:13:28.029038 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:29 crc kubenswrapper[4886]: I1124 09:13:29.082785 4886 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-ggbf5" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerName="registry-server" probeResult="failure" output=< Nov 24 09:13:29 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:13:29 crc kubenswrapper[4886]: > Nov 24 09:13:31 crc kubenswrapper[4886]: I1124 09:13:31.784290 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:13:31 crc kubenswrapper[4886]: I1124 09:13:31.785018 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:13:31 crc kubenswrapper[4886]: I1124 09:13:31.785075 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:13:31 crc kubenswrapper[4886]: I1124 09:13:31.786031 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f19fc5058a9ae6baaee874798ea7b1c9ef07faaf66a15067253a35a9f971d8b"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:13:31 crc kubenswrapper[4886]: I1124 09:13:31.786098 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" 
containerID="cri-o://4f19fc5058a9ae6baaee874798ea7b1c9ef07faaf66a15067253a35a9f971d8b" gracePeriod=600 Nov 24 09:13:32 crc kubenswrapper[4886]: I1124 09:13:32.319693 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="4f19fc5058a9ae6baaee874798ea7b1c9ef07faaf66a15067253a35a9f971d8b" exitCode=0 Nov 24 09:13:32 crc kubenswrapper[4886]: I1124 09:13:32.319773 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"4f19fc5058a9ae6baaee874798ea7b1c9ef07faaf66a15067253a35a9f971d8b"} Nov 24 09:13:32 crc kubenswrapper[4886]: I1124 09:13:32.320705 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c"} Nov 24 09:13:32 crc kubenswrapper[4886]: I1124 09:13:32.320749 4886 scope.go:117] "RemoveContainer" containerID="36e22a101132c390ac35de718c60f14be6675ff8618943dfbe4e49f19370e8c5" Nov 24 09:13:38 crc kubenswrapper[4886]: I1124 09:13:38.080298 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:38 crc kubenswrapper[4886]: I1124 09:13:38.137054 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:38 crc kubenswrapper[4886]: I1124 09:13:38.326810 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggbf5"] Nov 24 09:13:39 crc kubenswrapper[4886]: I1124 09:13:39.401466 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ggbf5" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" 
containerName="registry-server" containerID="cri-o://6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e" gracePeriod=2 Nov 24 09:13:39 crc kubenswrapper[4886]: I1124 09:13:39.827225 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:39 crc kubenswrapper[4886]: I1124 09:13:39.944178 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtdtc\" (UniqueName: \"kubernetes.io/projected/2018aa66-fca3-457f-84bf-048b30f88dbf-kube-api-access-qtdtc\") pod \"2018aa66-fca3-457f-84bf-048b30f88dbf\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " Nov 24 09:13:39 crc kubenswrapper[4886]: I1124 09:13:39.944296 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-catalog-content\") pod \"2018aa66-fca3-457f-84bf-048b30f88dbf\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " Nov 24 09:13:39 crc kubenswrapper[4886]: I1124 09:13:39.944459 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-utilities\") pod \"2018aa66-fca3-457f-84bf-048b30f88dbf\" (UID: \"2018aa66-fca3-457f-84bf-048b30f88dbf\") " Nov 24 09:13:39 crc kubenswrapper[4886]: I1124 09:13:39.945752 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-utilities" (OuterVolumeSpecName: "utilities") pod "2018aa66-fca3-457f-84bf-048b30f88dbf" (UID: "2018aa66-fca3-457f-84bf-048b30f88dbf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:13:39 crc kubenswrapper[4886]: I1124 09:13:39.954303 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2018aa66-fca3-457f-84bf-048b30f88dbf-kube-api-access-qtdtc" (OuterVolumeSpecName: "kube-api-access-qtdtc") pod "2018aa66-fca3-457f-84bf-048b30f88dbf" (UID: "2018aa66-fca3-457f-84bf-048b30f88dbf"). InnerVolumeSpecName "kube-api-access-qtdtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.044348 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2018aa66-fca3-457f-84bf-048b30f88dbf" (UID: "2018aa66-fca3-457f-84bf-048b30f88dbf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.047758 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtdtc\" (UniqueName: \"kubernetes.io/projected/2018aa66-fca3-457f-84bf-048b30f88dbf-kube-api-access-qtdtc\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.047821 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.047832 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2018aa66-fca3-457f-84bf-048b30f88dbf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.416344 4886 generic.go:334] "Generic (PLEG): container finished" podID="2018aa66-fca3-457f-84bf-048b30f88dbf" 
containerID="6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e" exitCode=0 Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.416388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggbf5" event={"ID":"2018aa66-fca3-457f-84bf-048b30f88dbf","Type":"ContainerDied","Data":"6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e"} Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.416435 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggbf5" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.416455 4886 scope.go:117] "RemoveContainer" containerID="6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.416441 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggbf5" event={"ID":"2018aa66-fca3-457f-84bf-048b30f88dbf","Type":"ContainerDied","Data":"409cd97819cac41bc8b66b1f0d35093a070db3bea5774d162e97da63839a67fb"} Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.443754 4886 scope.go:117] "RemoveContainer" containerID="26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.454443 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggbf5"] Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.473620 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ggbf5"] Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.490262 4886 scope.go:117] "RemoveContainer" containerID="61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.521845 4886 scope.go:117] "RemoveContainer" containerID="6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e" Nov 24 09:13:40 crc 
kubenswrapper[4886]: E1124 09:13:40.522685 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e\": container with ID starting with 6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e not found: ID does not exist" containerID="6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.522743 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e"} err="failed to get container status \"6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e\": rpc error: code = NotFound desc = could not find container \"6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e\": container with ID starting with 6e51c50e675c8655ca3a6bf99586a9d9ad795c43e3b3f748c51b17371ba4018e not found: ID does not exist" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.522776 4886 scope.go:117] "RemoveContainer" containerID="26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8" Nov 24 09:13:40 crc kubenswrapper[4886]: E1124 09:13:40.523253 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8\": container with ID starting with 26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8 not found: ID does not exist" containerID="26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.523308 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8"} err="failed to get container status 
\"26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8\": rpc error: code = NotFound desc = could not find container \"26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8\": container with ID starting with 26bcb351268f6bf985b5226affa97175b3b8a8029a696a15195c17122519ccb8 not found: ID does not exist" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.523340 4886 scope.go:117] "RemoveContainer" containerID="61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d" Nov 24 09:13:40 crc kubenswrapper[4886]: E1124 09:13:40.523945 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d\": container with ID starting with 61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d not found: ID does not exist" containerID="61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.523988 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d"} err="failed to get container status \"61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d\": rpc error: code = NotFound desc = could not find container \"61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d\": container with ID starting with 61fc19c82c6d39e3876297001f2e8b89006cad76fb40ccc335e851985d1c671d not found: ID does not exist" Nov 24 09:13:40 crc kubenswrapper[4886]: I1124 09:13:40.863204 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" path="/var/lib/kubelet/pods/2018aa66-fca3-457f-84bf-048b30f88dbf/volumes" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.617622 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4jqlz"] Nov 24 09:13:49 
crc kubenswrapper[4886]: E1124 09:13:49.618920 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerName="extract-content" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.618937 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerName="extract-content" Nov 24 09:13:49 crc kubenswrapper[4886]: E1124 09:13:49.618954 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerName="extract-utilities" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.618961 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerName="extract-utilities" Nov 24 09:13:49 crc kubenswrapper[4886]: E1124 09:13:49.618980 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerName="registry-server" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.618986 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerName="registry-server" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.621613 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2018aa66-fca3-457f-84bf-048b30f88dbf" containerName="registry-server" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.624102 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.638457 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jqlz"] Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.663410 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-utilities\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.663593 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfqb\" (UniqueName: \"kubernetes.io/projected/30ad19ab-504f-414e-b53e-cd54a1f917f6-kube-api-access-wsfqb\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.663646 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-catalog-content\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.767907 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfqb\" (UniqueName: \"kubernetes.io/projected/30ad19ab-504f-414e-b53e-cd54a1f917f6-kube-api-access-wsfqb\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.768056 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-catalog-content\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.768274 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-utilities\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.769105 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-utilities\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.769836 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-catalog-content\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.804348 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfqb\" (UniqueName: \"kubernetes.io/projected/30ad19ab-504f-414e-b53e-cd54a1f917f6-kube-api-access-wsfqb\") pod \"certified-operators-4jqlz\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:49 crc kubenswrapper[4886]: I1124 09:13:49.955061 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:50 crc kubenswrapper[4886]: I1124 09:13:50.294771 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jqlz"] Nov 24 09:13:50 crc kubenswrapper[4886]: I1124 09:13:50.530742 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jqlz" event={"ID":"30ad19ab-504f-414e-b53e-cd54a1f917f6","Type":"ContainerStarted","Data":"446189de6c70fd5a371281dc8a185048159b2e7f03b032553210670021d4eba7"} Nov 24 09:13:51 crc kubenswrapper[4886]: I1124 09:13:51.543943 4886 generic.go:334] "Generic (PLEG): container finished" podID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerID="6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460" exitCode=0 Nov 24 09:13:51 crc kubenswrapper[4886]: I1124 09:13:51.544027 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jqlz" event={"ID":"30ad19ab-504f-414e-b53e-cd54a1f917f6","Type":"ContainerDied","Data":"6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460"} Nov 24 09:13:52 crc kubenswrapper[4886]: I1124 09:13:52.556735 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jqlz" event={"ID":"30ad19ab-504f-414e-b53e-cd54a1f917f6","Type":"ContainerStarted","Data":"2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965"} Nov 24 09:13:53 crc kubenswrapper[4886]: I1124 09:13:53.592354 4886 generic.go:334] "Generic (PLEG): container finished" podID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerID="2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965" exitCode=0 Nov 24 09:13:53 crc kubenswrapper[4886]: I1124 09:13:53.592471 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jqlz" 
event={"ID":"30ad19ab-504f-414e-b53e-cd54a1f917f6","Type":"ContainerDied","Data":"2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965"} Nov 24 09:13:54 crc kubenswrapper[4886]: I1124 09:13:54.610102 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jqlz" event={"ID":"30ad19ab-504f-414e-b53e-cd54a1f917f6","Type":"ContainerStarted","Data":"879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c"} Nov 24 09:13:54 crc kubenswrapper[4886]: I1124 09:13:54.637289 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4jqlz" podStartSLOduration=3.103789258 podStartE2EDuration="5.637263885s" podCreationTimestamp="2025-11-24 09:13:49 +0000 UTC" firstStartedPulling="2025-11-24 09:13:51.549878568 +0000 UTC m=+1487.436616703" lastFinishedPulling="2025-11-24 09:13:54.083353195 +0000 UTC m=+1489.970091330" observedRunningTime="2025-11-24 09:13:54.631399919 +0000 UTC m=+1490.518138074" watchObservedRunningTime="2025-11-24 09:13:54.637263885 +0000 UTC m=+1490.524002020" Nov 24 09:13:59 crc kubenswrapper[4886]: I1124 09:13:59.955754 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:13:59 crc kubenswrapper[4886]: I1124 09:13:59.956565 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:14:00 crc kubenswrapper[4886]: I1124 09:14:00.008556 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:14:00 crc kubenswrapper[4886]: I1124 09:14:00.716133 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:14:00 crc kubenswrapper[4886]: I1124 09:14:00.781845 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-4jqlz"] Nov 24 09:14:02 crc kubenswrapper[4886]: I1124 09:14:02.680622 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4jqlz" podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerName="registry-server" containerID="cri-o://879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c" gracePeriod=2 Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.191476 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.362753 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-catalog-content\") pod \"30ad19ab-504f-414e-b53e-cd54a1f917f6\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.363059 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfqb\" (UniqueName: \"kubernetes.io/projected/30ad19ab-504f-414e-b53e-cd54a1f917f6-kube-api-access-wsfqb\") pod \"30ad19ab-504f-414e-b53e-cd54a1f917f6\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.363118 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-utilities\") pod \"30ad19ab-504f-414e-b53e-cd54a1f917f6\" (UID: \"30ad19ab-504f-414e-b53e-cd54a1f917f6\") " Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.364403 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-utilities" (OuterVolumeSpecName: "utilities") pod "30ad19ab-504f-414e-b53e-cd54a1f917f6" (UID: 
"30ad19ab-504f-414e-b53e-cd54a1f917f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.377081 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ad19ab-504f-414e-b53e-cd54a1f917f6-kube-api-access-wsfqb" (OuterVolumeSpecName: "kube-api-access-wsfqb") pod "30ad19ab-504f-414e-b53e-cd54a1f917f6" (UID: "30ad19ab-504f-414e-b53e-cd54a1f917f6"). InnerVolumeSpecName "kube-api-access-wsfqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.466501 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsfqb\" (UniqueName: \"kubernetes.io/projected/30ad19ab-504f-414e-b53e-cd54a1f917f6-kube-api-access-wsfqb\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.466545 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.694461 4886 generic.go:334] "Generic (PLEG): container finished" podID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerID="879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c" exitCode=0 Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.694469 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jqlz" event={"ID":"30ad19ab-504f-414e-b53e-cd54a1f917f6","Type":"ContainerDied","Data":"879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c"} Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.694571 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jqlz" 
event={"ID":"30ad19ab-504f-414e-b53e-cd54a1f917f6","Type":"ContainerDied","Data":"446189de6c70fd5a371281dc8a185048159b2e7f03b032553210670021d4eba7"} Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.694521 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jqlz" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.694605 4886 scope.go:117] "RemoveContainer" containerID="879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.724325 4886 scope.go:117] "RemoveContainer" containerID="2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.754245 4886 scope.go:117] "RemoveContainer" containerID="6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.791693 4886 scope.go:117] "RemoveContainer" containerID="879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c" Nov 24 09:14:03 crc kubenswrapper[4886]: E1124 09:14:03.792459 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c\": container with ID starting with 879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c not found: ID does not exist" containerID="879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.792576 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c"} err="failed to get container status \"879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c\": rpc error: code = NotFound desc = could not find container \"879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c\": 
container with ID starting with 879f219d91de5926a5e51c030f11599d7236ac02205857f958357cf50261920c not found: ID does not exist" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.792663 4886 scope.go:117] "RemoveContainer" containerID="2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965" Nov 24 09:14:03 crc kubenswrapper[4886]: E1124 09:14:03.793364 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965\": container with ID starting with 2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965 not found: ID does not exist" containerID="2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.793384 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965"} err="failed to get container status \"2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965\": rpc error: code = NotFound desc = could not find container \"2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965\": container with ID starting with 2c3e56fbecc24c12d6d74f63a117737a815dedd20ae8ebb424ac9d946e38e965 not found: ID does not exist" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.793397 4886 scope.go:117] "RemoveContainer" containerID="6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460" Nov 24 09:14:03 crc kubenswrapper[4886]: E1124 09:14:03.793741 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460\": container with ID starting with 6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460 not found: ID does not exist" 
containerID="6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460" Nov 24 09:14:03 crc kubenswrapper[4886]: I1124 09:14:03.793835 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460"} err="failed to get container status \"6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460\": rpc error: code = NotFound desc = could not find container \"6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460\": container with ID starting with 6bdc7b9bc8388a207c1b54318ad2d807217215c72e1aab365f9d7e2b74c0d460 not found: ID does not exist" Nov 24 09:14:04 crc kubenswrapper[4886]: I1124 09:14:04.077940 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30ad19ab-504f-414e-b53e-cd54a1f917f6" (UID: "30ad19ab-504f-414e-b53e-cd54a1f917f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:14:04 crc kubenswrapper[4886]: I1124 09:14:04.081058 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30ad19ab-504f-414e-b53e-cd54a1f917f6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:04 crc kubenswrapper[4886]: I1124 09:14:04.332643 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jqlz"] Nov 24 09:14:04 crc kubenswrapper[4886]: I1124 09:14:04.343659 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4jqlz"] Nov 24 09:14:04 crc kubenswrapper[4886]: I1124 09:14:04.866900 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" path="/var/lib/kubelet/pods/30ad19ab-504f-414e-b53e-cd54a1f917f6/volumes" Nov 24 09:14:22 crc kubenswrapper[4886]: I1124 09:14:22.340053 4886 scope.go:117] "RemoveContainer" containerID="ca45e518e732ccb8752a7945fedfd2abc7db31393667870fd9da4b132a0bff5b" Nov 24 09:14:22 crc kubenswrapper[4886]: I1124 09:14:22.362779 4886 scope.go:117] "RemoveContainer" containerID="a753f4d959369346b62434e569d11133ba573b9a5cbd32cd9161eeffadf88a2b" Nov 24 09:14:22 crc kubenswrapper[4886]: I1124 09:14:22.410008 4886 scope.go:117] "RemoveContainer" containerID="a9e609372ed8f7c41485e93de92f4c0f886ac1348e1a3d5e436e93097f5f35cf" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.170252 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m"] Nov 24 09:15:00 crc kubenswrapper[4886]: E1124 09:15:00.171705 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerName="extract-content" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.171723 4886 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerName="extract-content" Nov 24 09:15:00 crc kubenswrapper[4886]: E1124 09:15:00.171789 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerName="extract-utilities" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.171799 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerName="extract-utilities" Nov 24 09:15:00 crc kubenswrapper[4886]: E1124 09:15:00.171816 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerName="registry-server" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.171823 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerName="registry-server" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.172064 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ad19ab-504f-414e-b53e-cd54a1f917f6" containerName="registry-server" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.173508 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.184308 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.184431 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.190591 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m"] Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.256700 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbckp\" (UniqueName: \"kubernetes.io/projected/74884a84-50f8-45a2-9b2c-29f84f510593-kube-api-access-cbckp\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.256768 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74884a84-50f8-45a2-9b2c-29f84f510593-secret-volume\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.256882 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74884a84-50f8-45a2-9b2c-29f84f510593-config-volume\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.358939 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbckp\" (UniqueName: \"kubernetes.io/projected/74884a84-50f8-45a2-9b2c-29f84f510593-kube-api-access-cbckp\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.359048 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74884a84-50f8-45a2-9b2c-29f84f510593-secret-volume\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.359245 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74884a84-50f8-45a2-9b2c-29f84f510593-config-volume\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.360312 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74884a84-50f8-45a2-9b2c-29f84f510593-config-volume\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.367726 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/74884a84-50f8-45a2-9b2c-29f84f510593-secret-volume\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.377893 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbckp\" (UniqueName: \"kubernetes.io/projected/74884a84-50f8-45a2-9b2c-29f84f510593-kube-api-access-cbckp\") pod \"collect-profiles-29399595-hxg8m\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.510704 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:00 crc kubenswrapper[4886]: I1124 09:15:00.998301 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m"] Nov 24 09:15:01 crc kubenswrapper[4886]: I1124 09:15:01.355718 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" event={"ID":"74884a84-50f8-45a2-9b2c-29f84f510593","Type":"ContainerStarted","Data":"984c775b9a47f7a5588a2753d4614fdfaf7ff38c830934c1ccec29151730b8c5"} Nov 24 09:15:01 crc kubenswrapper[4886]: I1124 09:15:01.356249 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" event={"ID":"74884a84-50f8-45a2-9b2c-29f84f510593","Type":"ContainerStarted","Data":"d6420d5a0cac1d2def48789da72ee57b97fd49a4025d6b93adf9946fd9f7ef2b"} Nov 24 09:15:01 crc kubenswrapper[4886]: I1124 09:15:01.387726 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" 
podStartSLOduration=1.387703143 podStartE2EDuration="1.387703143s" podCreationTimestamp="2025-11-24 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:15:01.383828953 +0000 UTC m=+1557.270567088" watchObservedRunningTime="2025-11-24 09:15:01.387703143 +0000 UTC m=+1557.274441278" Nov 24 09:15:02 crc kubenswrapper[4886]: I1124 09:15:02.371908 4886 generic.go:334] "Generic (PLEG): container finished" podID="74884a84-50f8-45a2-9b2c-29f84f510593" containerID="984c775b9a47f7a5588a2753d4614fdfaf7ff38c830934c1ccec29151730b8c5" exitCode=0 Nov 24 09:15:02 crc kubenswrapper[4886]: I1124 09:15:02.372021 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" event={"ID":"74884a84-50f8-45a2-9b2c-29f84f510593","Type":"ContainerDied","Data":"984c775b9a47f7a5588a2753d4614fdfaf7ff38c830934c1ccec29151730b8c5"} Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.727659 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.849851 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74884a84-50f8-45a2-9b2c-29f84f510593-config-volume\") pod \"74884a84-50f8-45a2-9b2c-29f84f510593\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.850005 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbckp\" (UniqueName: \"kubernetes.io/projected/74884a84-50f8-45a2-9b2c-29f84f510593-kube-api-access-cbckp\") pod \"74884a84-50f8-45a2-9b2c-29f84f510593\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.850092 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74884a84-50f8-45a2-9b2c-29f84f510593-secret-volume\") pod \"74884a84-50f8-45a2-9b2c-29f84f510593\" (UID: \"74884a84-50f8-45a2-9b2c-29f84f510593\") " Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.851292 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74884a84-50f8-45a2-9b2c-29f84f510593-config-volume" (OuterVolumeSpecName: "config-volume") pod "74884a84-50f8-45a2-9b2c-29f84f510593" (UID: "74884a84-50f8-45a2-9b2c-29f84f510593"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.852677 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74884a84-50f8-45a2-9b2c-29f84f510593-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.861001 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74884a84-50f8-45a2-9b2c-29f84f510593-kube-api-access-cbckp" (OuterVolumeSpecName: "kube-api-access-cbckp") pod "74884a84-50f8-45a2-9b2c-29f84f510593" (UID: "74884a84-50f8-45a2-9b2c-29f84f510593"). InnerVolumeSpecName "kube-api-access-cbckp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.862349 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74884a84-50f8-45a2-9b2c-29f84f510593-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74884a84-50f8-45a2-9b2c-29f84f510593" (UID: "74884a84-50f8-45a2-9b2c-29f84f510593"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.956640 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbckp\" (UniqueName: \"kubernetes.io/projected/74884a84-50f8-45a2-9b2c-29f84f510593-kube-api-access-cbckp\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:03 crc kubenswrapper[4886]: I1124 09:15:03.956712 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74884a84-50f8-45a2-9b2c-29f84f510593-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:04 crc kubenswrapper[4886]: I1124 09:15:04.406379 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" event={"ID":"74884a84-50f8-45a2-9b2c-29f84f510593","Type":"ContainerDied","Data":"d6420d5a0cac1d2def48789da72ee57b97fd49a4025d6b93adf9946fd9f7ef2b"} Nov 24 09:15:04 crc kubenswrapper[4886]: I1124 09:15:04.406849 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6420d5a0cac1d2def48789da72ee57b97fd49a4025d6b93adf9946fd9f7ef2b" Nov 24 09:15:04 crc kubenswrapper[4886]: I1124 09:15:04.406492 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m" Nov 24 09:16:01 crc kubenswrapper[4886]: I1124 09:16:01.785003 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:16:01 crc kubenswrapper[4886]: I1124 09:16:01.785838 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:16:22 crc kubenswrapper[4886]: I1124 09:16:22.224097 4886 generic.go:334] "Generic (PLEG): container finished" podID="e26edc4e-16ec-494e-9011-1dcaf51099be" containerID="67dc99dd3042d384688ecd20b5a6c002661b08a24233c5e1162dac3fcd280bf7" exitCode=0 Nov 24 09:16:22 crc kubenswrapper[4886]: I1124 09:16:22.224185 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" event={"ID":"e26edc4e-16ec-494e-9011-1dcaf51099be","Type":"ContainerDied","Data":"67dc99dd3042d384688ecd20b5a6c002661b08a24233c5e1162dac3fcd280bf7"} Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.679126 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.782090 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-bootstrap-combined-ca-bundle\") pod \"e26edc4e-16ec-494e-9011-1dcaf51099be\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.782341 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x68p8\" (UniqueName: \"kubernetes.io/projected/e26edc4e-16ec-494e-9011-1dcaf51099be-kube-api-access-x68p8\") pod \"e26edc4e-16ec-494e-9011-1dcaf51099be\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.782395 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-ssh-key\") pod \"e26edc4e-16ec-494e-9011-1dcaf51099be\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.782548 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-inventory\") pod \"e26edc4e-16ec-494e-9011-1dcaf51099be\" (UID: \"e26edc4e-16ec-494e-9011-1dcaf51099be\") " Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.791519 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26edc4e-16ec-494e-9011-1dcaf51099be-kube-api-access-x68p8" (OuterVolumeSpecName: "kube-api-access-x68p8") pod "e26edc4e-16ec-494e-9011-1dcaf51099be" (UID: "e26edc4e-16ec-494e-9011-1dcaf51099be"). InnerVolumeSpecName "kube-api-access-x68p8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.792803 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e26edc4e-16ec-494e-9011-1dcaf51099be" (UID: "e26edc4e-16ec-494e-9011-1dcaf51099be"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.820947 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-inventory" (OuterVolumeSpecName: "inventory") pod "e26edc4e-16ec-494e-9011-1dcaf51099be" (UID: "e26edc4e-16ec-494e-9011-1dcaf51099be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.821716 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e26edc4e-16ec-494e-9011-1dcaf51099be" (UID: "e26edc4e-16ec-494e-9011-1dcaf51099be"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.886269 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.886314 4886 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.886331 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x68p8\" (UniqueName: \"kubernetes.io/projected/e26edc4e-16ec-494e-9011-1dcaf51099be-kube-api-access-x68p8\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:23 crc kubenswrapper[4886]: I1124 09:16:23.886346 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e26edc4e-16ec-494e-9011-1dcaf51099be-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.246742 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" event={"ID":"e26edc4e-16ec-494e-9011-1dcaf51099be","Type":"ContainerDied","Data":"eeb87cdaf6cf1a7baca088af6dca5202b276906d2803b12c6ca9a36c7cc6459b"} Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.246825 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb87cdaf6cf1a7baca088af6dca5202b276906d2803b12c6ca9a36c7cc6459b" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.246844 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k624c" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.346556 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td"] Nov 24 09:16:24 crc kubenswrapper[4886]: E1124 09:16:24.347170 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26edc4e-16ec-494e-9011-1dcaf51099be" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.347199 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26edc4e-16ec-494e-9011-1dcaf51099be" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:16:24 crc kubenswrapper[4886]: E1124 09:16:24.347232 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74884a84-50f8-45a2-9b2c-29f84f510593" containerName="collect-profiles" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.347246 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="74884a84-50f8-45a2-9b2c-29f84f510593" containerName="collect-profiles" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.347479 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26edc4e-16ec-494e-9011-1dcaf51099be" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.347514 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="74884a84-50f8-45a2-9b2c-29f84f510593" containerName="collect-profiles" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.348331 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.351099 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.352948 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.353546 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.354453 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.363244 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td"] Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.396349 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.396485 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgvt\" (UniqueName: \"kubernetes.io/projected/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-kube-api-access-lxgvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 
09:16:24.396556 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.498047 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.498203 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.498285 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgvt\" (UniqueName: \"kubernetes.io/projected/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-kube-api-access-lxgvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.503630 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.504957 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.527410 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgvt\" (UniqueName: \"kubernetes.io/projected/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-kube-api-access-lxgvt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h68td\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:24 crc kubenswrapper[4886]: I1124 09:16:24.680654 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:16:25 crc kubenswrapper[4886]: I1124 09:16:25.299749 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:16:25 crc kubenswrapper[4886]: I1124 09:16:25.309091 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td"] Nov 24 09:16:26 crc kubenswrapper[4886]: I1124 09:16:26.291552 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" event={"ID":"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1","Type":"ContainerStarted","Data":"8caa1c6b85b7524910d054ea0446ba6955aabbe19afdc02950b7cae3a37ff4b2"} Nov 24 09:16:27 crc kubenswrapper[4886]: I1124 09:16:27.305856 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" event={"ID":"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1","Type":"ContainerStarted","Data":"3a0f183eed4c0b4439b5161d6cdb1bb2b00952953d0039f4224fd16390c6ad4e"} Nov 24 09:16:27 crc kubenswrapper[4886]: I1124 09:16:27.341276 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" podStartSLOduration=2.584848122 podStartE2EDuration="3.341251101s" podCreationTimestamp="2025-11-24 09:16:24 +0000 UTC" firstStartedPulling="2025-11-24 09:16:25.299476454 +0000 UTC m=+1641.186214589" lastFinishedPulling="2025-11-24 09:16:26.055879433 +0000 UTC m=+1641.942617568" observedRunningTime="2025-11-24 09:16:27.322067772 +0000 UTC m=+1643.208805907" watchObservedRunningTime="2025-11-24 09:16:27.341251101 +0000 UTC m=+1643.227989246" Nov 24 09:16:31 crc kubenswrapper[4886]: I1124 09:16:31.784012 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:16:31 crc kubenswrapper[4886]: I1124 09:16:31.784653 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:16:52 crc kubenswrapper[4886]: I1124 09:16:52.049517 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cbj48"] Nov 24 09:16:52 crc kubenswrapper[4886]: I1124 09:16:52.057651 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8106-account-create-dkh5l"] Nov 24 09:16:52 crc kubenswrapper[4886]: I1124 09:16:52.066211 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cbj48"] Nov 24 09:16:52 crc kubenswrapper[4886]: I1124 09:16:52.095450 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8106-account-create-dkh5l"] Nov 24 09:16:52 crc kubenswrapper[4886]: I1124 09:16:52.861715 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c6c7c4-c29b-4884-9803-ee0d75bd2791" path="/var/lib/kubelet/pods/18c6c7c4-c29b-4884-9803-ee0d75bd2791/volumes" Nov 24 09:16:52 crc kubenswrapper[4886]: I1124 09:16:52.862749 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a39199-f209-40bc-932d-9b0274ce5a12" path="/var/lib/kubelet/pods/19a39199-f209-40bc-932d-9b0274ce5a12/volumes" Nov 24 09:16:55 crc kubenswrapper[4886]: I1124 09:16:55.051634 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mhdfz"] Nov 24 09:16:55 crc kubenswrapper[4886]: I1124 09:16:55.061740 4886 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-db-create-mhdfz"] Nov 24 09:16:56 crc kubenswrapper[4886]: I1124 09:16:56.052017 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e102-account-create-rjpsv"] Nov 24 09:16:56 crc kubenswrapper[4886]: I1124 09:16:56.063224 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e102-account-create-rjpsv"] Nov 24 09:16:56 crc kubenswrapper[4886]: I1124 09:16:56.867002 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d4e8bd-b5f7-4429-8494-e88fcdb32491" path="/var/lib/kubelet/pods/c0d4e8bd-b5f7-4429-8494-e88fcdb32491/volumes" Nov 24 09:16:56 crc kubenswrapper[4886]: I1124 09:16:56.867732 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed63768a-813f-4e2e-8a49-878492cc908c" path="/var/lib/kubelet/pods/ed63768a-813f-4e2e-8a49-878492cc908c/volumes" Nov 24 09:17:00 crc kubenswrapper[4886]: I1124 09:17:00.035739 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9n6ds"] Nov 24 09:17:00 crc kubenswrapper[4886]: I1124 09:17:00.045228 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9n6ds"] Nov 24 09:17:00 crc kubenswrapper[4886]: I1124 09:17:00.055105 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e011-account-create-rzd97"] Nov 24 09:17:00 crc kubenswrapper[4886]: I1124 09:17:00.063792 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e011-account-create-rzd97"] Nov 24 09:17:00 crc kubenswrapper[4886]: I1124 09:17:00.862905 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b674b68-5eef-4c55-817f-8dec4dc781fd" path="/var/lib/kubelet/pods/6b674b68-5eef-4c55-817f-8dec4dc781fd/volumes" Nov 24 09:17:00 crc kubenswrapper[4886]: I1124 09:17:00.863539 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed0c697-2ac4-4500-a90d-e09d6ba279ae" 
path="/var/lib/kubelet/pods/9ed0c697-2ac4-4500-a90d-e09d6ba279ae/volumes" Nov 24 09:17:01 crc kubenswrapper[4886]: I1124 09:17:01.784329 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:17:01 crc kubenswrapper[4886]: I1124 09:17:01.784809 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:17:01 crc kubenswrapper[4886]: I1124 09:17:01.784871 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:17:01 crc kubenswrapper[4886]: I1124 09:17:01.786000 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:17:01 crc kubenswrapper[4886]: I1124 09:17:01.786061 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" gracePeriod=600 Nov 24 09:17:01 crc kubenswrapper[4886]: E1124 09:17:01.914922 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:17:02 crc kubenswrapper[4886]: I1124 09:17:02.693110 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" exitCode=0 Nov 24 09:17:02 crc kubenswrapper[4886]: I1124 09:17:02.693183 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c"} Nov 24 09:17:02 crc kubenswrapper[4886]: I1124 09:17:02.693249 4886 scope.go:117] "RemoveContainer" containerID="4f19fc5058a9ae6baaee874798ea7b1c9ef07faaf66a15067253a35a9f971d8b" Nov 24 09:17:02 crc kubenswrapper[4886]: I1124 09:17:02.694101 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:17:02 crc kubenswrapper[4886]: E1124 09:17:02.694553 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:17:13 crc kubenswrapper[4886]: I1124 09:17:13.849994 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:17:13 crc 
kubenswrapper[4886]: E1124 09:17:13.851047 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:17:18 crc kubenswrapper[4886]: I1124 09:17:18.037709 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8e9a-account-create-b55cn"] Nov 24 09:17:18 crc kubenswrapper[4886]: I1124 09:17:18.054210 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8e9a-account-create-b55cn"] Nov 24 09:17:18 crc kubenswrapper[4886]: I1124 09:17:18.862883 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7659c776-9218-442b-b813-ebff19a5e5ee" path="/var/lib/kubelet/pods/7659c776-9218-442b-b813-ebff19a5e5ee/volumes" Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.035605 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-387e-account-create-hz44q"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.044695 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2c4c-account-create-qcwml"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.053765 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7fqnd"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.062812 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5xk4v"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.072065 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-387e-account-create-hz44q"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.080776 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-create-ll5c2"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.090777 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2c4c-account-create-qcwml"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.102615 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5xk4v"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.115786 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7fqnd"] Nov 24 09:17:19 crc kubenswrapper[4886]: I1124 09:17:19.126325 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ll5c2"] Nov 24 09:17:20 crc kubenswrapper[4886]: I1124 09:17:20.866473 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22909ed9-c35f-4768-ab83-9f8a3442718b" path="/var/lib/kubelet/pods/22909ed9-c35f-4768-ab83-9f8a3442718b/volumes" Nov 24 09:17:20 crc kubenswrapper[4886]: I1124 09:17:20.867290 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63eb5a8c-8a76-421f-9a44-a63c7ab43077" path="/var/lib/kubelet/pods/63eb5a8c-8a76-421f-9a44-a63c7ab43077/volumes" Nov 24 09:17:20 crc kubenswrapper[4886]: I1124 09:17:20.867973 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa05ad9f-553a-4565-b074-6cae6220d5d1" path="/var/lib/kubelet/pods/fa05ad9f-553a-4565-b074-6cae6220d5d1/volumes" Nov 24 09:17:20 crc kubenswrapper[4886]: I1124 09:17:20.868762 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131" path="/var/lib/kubelet/pods/fa3c02ae-6e3f-4d5d-bd36-57dc2ea8c131/volumes" Nov 24 09:17:20 crc kubenswrapper[4886]: I1124 09:17:20.875082 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe94e3da-9230-46bd-9139-1ec416d11108" path="/var/lib/kubelet/pods/fe94e3da-9230-46bd-9139-1ec416d11108/volumes" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 
09:17:22.561927 4886 scope.go:117] "RemoveContainer" containerID="3c84c6617657ebb244bbabcc08a3d5478d8672f9c4313122c44cc435d41e0cbc" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.595608 4886 scope.go:117] "RemoveContainer" containerID="f7d92ca92717e86cfb25908c6478d07438610dc900d55fdbfdd9f32a7eedf7e0" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.646264 4886 scope.go:117] "RemoveContainer" containerID="0786816348c1f062fb2bc65a7e71b1bc252fb4f34cbb392aacf694c2e1faa01d" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.706789 4886 scope.go:117] "RemoveContainer" containerID="f0691e020efc30c39acfccae054077c4373d9bf81d4c01d5c3966c750533db67" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.747383 4886 scope.go:117] "RemoveContainer" containerID="ee2c3e316b0e6a192598ed3200f7cf25cb688664e2dd7f2c96ff86b9759f9a8c" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.799318 4886 scope.go:117] "RemoveContainer" containerID="d6df49255f3dce252e91237451426f9168cb6ac269bbf6064047991e81d048fd" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.848013 4886 scope.go:117] "RemoveContainer" containerID="9a4b068b4efa9e961ba6485d83cd258db877c7b2ebd1a6c688d9d3508638d9f4" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.874552 4886 scope.go:117] "RemoveContainer" containerID="3ba76402272655a4a57caed0d6d49a8a00ea1c56aca763d213346df679685c22" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.903720 4886 scope.go:117] "RemoveContainer" containerID="02118a86ec1c00b341f4e149ec6b1bdbeb9ccfb61da67958ae942f8004d6372c" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.931258 4886 scope.go:117] "RemoveContainer" containerID="72ebab10f42383296e64cfed0ad748412174894810e8d30511ef9d6e00f3be8f" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.962705 4886 scope.go:117] "RemoveContainer" containerID="b1e92747402cd8daf4648b849b492976ddbfd8a3f0b0310c191abc6e545d4597" Nov 24 09:17:22 crc kubenswrapper[4886]: I1124 09:17:22.985397 4886 
scope.go:117] "RemoveContainer" containerID="3474680a0d4d0f9b57bf626a496826fba8e355aa9b52bd3d677a201467a8b0a2" Nov 24 09:17:24 crc kubenswrapper[4886]: I1124 09:17:24.856835 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:17:24 crc kubenswrapper[4886]: E1124 09:17:24.857291 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:17:25 crc kubenswrapper[4886]: I1124 09:17:25.042563 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4sj8x"] Nov 24 09:17:25 crc kubenswrapper[4886]: I1124 09:17:25.050364 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4sj8x"] Nov 24 09:17:26 crc kubenswrapper[4886]: I1124 09:17:26.861608 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45428c8-b123-4e3e-9ba0-5ab11cf317a5" path="/var/lib/kubelet/pods/f45428c8-b123-4e3e-9ba0-5ab11cf317a5/volumes" Nov 24 09:17:27 crc kubenswrapper[4886]: I1124 09:17:27.035908 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-t7vln"] Nov 24 09:17:27 crc kubenswrapper[4886]: I1124 09:17:27.044544 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-t7vln"] Nov 24 09:17:28 crc kubenswrapper[4886]: I1124 09:17:28.862718 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606b0ae4-0857-44f3-a72a-aa8cfa5416ef" path="/var/lib/kubelet/pods/606b0ae4-0857-44f3-a72a-aa8cfa5416ef/volumes" Nov 24 09:17:35 crc kubenswrapper[4886]: I1124 09:17:35.851167 4886 scope.go:117] 
"RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:17:35 crc kubenswrapper[4886]: E1124 09:17:35.852489 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:17:49 crc kubenswrapper[4886]: I1124 09:17:49.849976 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:17:49 crc kubenswrapper[4886]: E1124 09:17:49.850968 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:18:01 crc kubenswrapper[4886]: I1124 09:18:01.849180 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:18:01 crc kubenswrapper[4886]: E1124 09:18:01.850093 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:18:02 crc kubenswrapper[4886]: I1124 09:18:02.048576 
4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lqtgn"] Nov 24 09:18:02 crc kubenswrapper[4886]: I1124 09:18:02.057807 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lqtgn"] Nov 24 09:18:02 crc kubenswrapper[4886]: I1124 09:18:02.860901 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="390e7d30-f337-4255-a488-3b5b345235ed" path="/var/lib/kubelet/pods/390e7d30-f337-4255-a488-3b5b345235ed/volumes" Nov 24 09:18:03 crc kubenswrapper[4886]: I1124 09:18:03.360098 4886 generic.go:334] "Generic (PLEG): container finished" podID="5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1" containerID="3a0f183eed4c0b4439b5161d6cdb1bb2b00952953d0039f4224fd16390c6ad4e" exitCode=0 Nov 24 09:18:03 crc kubenswrapper[4886]: I1124 09:18:03.360183 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" event={"ID":"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1","Type":"ContainerDied","Data":"3a0f183eed4c0b4439b5161d6cdb1bb2b00952953d0039f4224fd16390c6ad4e"} Nov 24 09:18:04 crc kubenswrapper[4886]: I1124 09:18:04.861000 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.002727 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-ssh-key\") pod \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.002855 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-inventory\") pod \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.002884 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxgvt\" (UniqueName: \"kubernetes.io/projected/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-kube-api-access-lxgvt\") pod \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\" (UID: \"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1\") " Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.009572 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-kube-api-access-lxgvt" (OuterVolumeSpecName: "kube-api-access-lxgvt") pod "5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1" (UID: "5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1"). InnerVolumeSpecName "kube-api-access-lxgvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.032896 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-inventory" (OuterVolumeSpecName: "inventory") pod "5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1" (UID: "5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.035418 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1" (UID: "5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.105526 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.105569 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.105580 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxgvt\" (UniqueName: \"kubernetes.io/projected/5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1-kube-api-access-lxgvt\") on node \"crc\" DevicePath \"\"" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.387556 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" event={"ID":"5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1","Type":"ContainerDied","Data":"8caa1c6b85b7524910d054ea0446ba6955aabbe19afdc02950b7cae3a37ff4b2"} Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.387614 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8caa1c6b85b7524910d054ea0446ba6955aabbe19afdc02950b7cae3a37ff4b2" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.387710 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h68td" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.475215 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq"] Nov 24 09:18:05 crc kubenswrapper[4886]: E1124 09:18:05.475958 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.476034 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.476327 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.477287 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.479913 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.479913 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.480105 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.481017 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.487215 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq"] Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.514684 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.514768 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.514997 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzg6\" (UniqueName: \"kubernetes.io/projected/352e856d-6e0d-4aba-b2ce-8063ed40a041-kube-api-access-8rzg6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.617058 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzg6\" (UniqueName: \"kubernetes.io/projected/352e856d-6e0d-4aba-b2ce-8063ed40a041-kube-api-access-8rzg6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.617215 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.617266 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.622175 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.622284 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.638483 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzg6\" (UniqueName: \"kubernetes.io/projected/352e856d-6e0d-4aba-b2ce-8063ed40a041-kube-api-access-8rzg6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w42vq\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:05 crc kubenswrapper[4886]: I1124 09:18:05.806709 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:18:06 crc kubenswrapper[4886]: I1124 09:18:06.370448 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq"] Nov 24 09:18:06 crc kubenswrapper[4886]: I1124 09:18:06.399716 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" event={"ID":"352e856d-6e0d-4aba-b2ce-8063ed40a041","Type":"ContainerStarted","Data":"c16087d76519bc9992949886d12286208a347ccacac4cd8af9481caa1d6e72b2"} Nov 24 09:18:07 crc kubenswrapper[4886]: I1124 09:18:07.043209 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8bzdg"] Nov 24 09:18:07 crc kubenswrapper[4886]: I1124 09:18:07.078093 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8bzdg"] Nov 24 09:18:07 crc kubenswrapper[4886]: I1124 09:18:07.410828 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" event={"ID":"352e856d-6e0d-4aba-b2ce-8063ed40a041","Type":"ContainerStarted","Data":"0718764d0fdd929b24e8643174a060935379f003317afbb1bf7e1c4697e5b29d"} Nov 24 09:18:07 crc kubenswrapper[4886]: I1124 09:18:07.437334 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" podStartSLOduration=1.937548507 podStartE2EDuration="2.4373051s" podCreationTimestamp="2025-11-24 09:18:05 +0000 UTC" firstStartedPulling="2025-11-24 09:18:06.386407393 +0000 UTC m=+1742.273145528" lastFinishedPulling="2025-11-24 09:18:06.886163986 +0000 UTC m=+1742.772902121" observedRunningTime="2025-11-24 09:18:07.429041164 +0000 UTC m=+1743.315779309" watchObservedRunningTime="2025-11-24 09:18:07.4373051 +0000 UTC m=+1743.324043235" Nov 24 09:18:08 crc kubenswrapper[4886]: 
I1124 09:18:08.861463 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9603d94-2b25-4cdc-bab2-daeae2b9f8a7" path="/var/lib/kubelet/pods/d9603d94-2b25-4cdc-bab2-daeae2b9f8a7/volumes" Nov 24 09:18:14 crc kubenswrapper[4886]: I1124 09:18:14.859170 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:18:14 crc kubenswrapper[4886]: E1124 09:18:14.860027 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:18:17 crc kubenswrapper[4886]: I1124 09:18:17.032462 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-c6hzr"] Nov 24 09:18:17 crc kubenswrapper[4886]: I1124 09:18:17.043344 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-c6hzr"] Nov 24 09:18:18 crc kubenswrapper[4886]: I1124 09:18:18.861067 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec036c5c-6eff-4c4e-83c2-5727576b540e" path="/var/lib/kubelet/pods/ec036c5c-6eff-4c4e-83c2-5727576b540e/volumes" Nov 24 09:18:23 crc kubenswrapper[4886]: I1124 09:18:23.229075 4886 scope.go:117] "RemoveContainer" containerID="4bcf034099c8cf9144e31f390a627282fb58c0b2bce78275df11349602751575" Nov 24 09:18:23 crc kubenswrapper[4886]: I1124 09:18:23.274904 4886 scope.go:117] "RemoveContainer" containerID="eb061de0a3b606b9324ec783a29c55b4b2f1d44741b5469c970c9ad488a0b3a6" Nov 24 09:18:23 crc kubenswrapper[4886]: I1124 09:18:23.329571 4886 scope.go:117] "RemoveContainer" containerID="42803f854b1af557b3c6ed91baf22b6bb75aa4c3c8cc122b053f1b891d9e59bd" Nov 24 
09:18:23 crc kubenswrapper[4886]: I1124 09:18:23.371805 4886 scope.go:117] "RemoveContainer" containerID="bad5972703be48ed87467a308a29d4c811db61eabcf9b35e13da11a1be3c6fe1" Nov 24 09:18:23 crc kubenswrapper[4886]: I1124 09:18:23.429169 4886 scope.go:117] "RemoveContainer" containerID="ee3f62d7c41f7cbeb64a14223a657dfe0b6d8af2c34cbb7bf58d4ebe083c7ccf" Nov 24 09:18:28 crc kubenswrapper[4886]: I1124 09:18:28.044518 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-8hkhc"] Nov 24 09:18:28 crc kubenswrapper[4886]: I1124 09:18:28.052815 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-8hkhc"] Nov 24 09:18:28 crc kubenswrapper[4886]: I1124 09:18:28.862036 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e82e3b-0acc-454e-b8b5-cf584f3298b4" path="/var/lib/kubelet/pods/a2e82e3b-0acc-454e-b8b5-cf584f3298b4/volumes" Nov 24 09:18:29 crc kubenswrapper[4886]: I1124 09:18:29.034242 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ph4gr"] Nov 24 09:18:29 crc kubenswrapper[4886]: I1124 09:18:29.043634 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ph4gr"] Nov 24 09:18:29 crc kubenswrapper[4886]: I1124 09:18:29.849735 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:18:29 crc kubenswrapper[4886]: E1124 09:18:29.850417 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:18:30 crc kubenswrapper[4886]: I1124 09:18:30.863823 4886 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7ca0ca62-7545-4e1a-9969-121899a789b0" path="/var/lib/kubelet/pods/7ca0ca62-7545-4e1a-9969-121899a789b0/volumes" Nov 24 09:18:41 crc kubenswrapper[4886]: I1124 09:18:41.850198 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:18:41 crc kubenswrapper[4886]: E1124 09:18:41.851555 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:18:53 crc kubenswrapper[4886]: I1124 09:18:53.849742 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:18:53 crc kubenswrapper[4886]: E1124 09:18:53.850759 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:19:06 crc kubenswrapper[4886]: I1124 09:19:06.850070 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:19:06 crc kubenswrapper[4886]: E1124 09:19:06.852447 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:19:12 crc kubenswrapper[4886]: I1124 09:19:12.042483 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vbj6z"] Nov 24 09:19:12 crc kubenswrapper[4886]: I1124 09:19:12.053080 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vbj6z"] Nov 24 09:19:12 crc kubenswrapper[4886]: I1124 09:19:12.873344 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="820724e3-dec6-48f0-8626-e287d58059d3" path="/var/lib/kubelet/pods/820724e3-dec6-48f0-8626-e287d58059d3/volumes" Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.042639 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-t72qh"] Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.052680 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3106-account-create-rd5w6"] Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.060387 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-01c7-account-create-h5txn"] Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.067879 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-krgfp"] Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.075118 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-bf4c-account-create-2vcrs"] Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.084099 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-t72qh"] Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.092851 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-01c7-account-create-h5txn"] Nov 24 09:19:13 crc 
kubenswrapper[4886]: I1124 09:19:13.100240 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3106-account-create-rd5w6"] Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.108058 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-krgfp"] Nov 24 09:19:13 crc kubenswrapper[4886]: I1124 09:19:13.116293 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-bf4c-account-create-2vcrs"] Nov 24 09:19:14 crc kubenswrapper[4886]: I1124 09:19:14.864933 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d522eb1-b9f9-47ae-bb27-616dffd736d3" path="/var/lib/kubelet/pods/3d522eb1-b9f9-47ae-bb27-616dffd736d3/volumes" Nov 24 09:19:14 crc kubenswrapper[4886]: I1124 09:19:14.866319 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58780350-18ab-4b0c-ace4-fa09769e0266" path="/var/lib/kubelet/pods/58780350-18ab-4b0c-ace4-fa09769e0266/volumes" Nov 24 09:19:14 crc kubenswrapper[4886]: I1124 09:19:14.869080 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e1de22-7cd4-4929-b282-886695a613c2" path="/var/lib/kubelet/pods/78e1de22-7cd4-4929-b282-886695a613c2/volumes" Nov 24 09:19:14 crc kubenswrapper[4886]: I1124 09:19:14.870374 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a" path="/var/lib/kubelet/pods/8f1558f7-ba7f-4a8b-9dad-112f3aaacb7a/volumes" Nov 24 09:19:14 crc kubenswrapper[4886]: I1124 09:19:14.872539 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902ebf1f-132f-40a8-b469-d816a555740e" path="/var/lib/kubelet/pods/902ebf1f-132f-40a8-b469-d816a555740e/volumes" Nov 24 09:19:20 crc kubenswrapper[4886]: I1124 09:19:20.850386 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:19:20 crc kubenswrapper[4886]: E1124 09:19:20.851393 4886 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:19:22 crc kubenswrapper[4886]: I1124 09:19:22.590891 4886 generic.go:334] "Generic (PLEG): container finished" podID="352e856d-6e0d-4aba-b2ce-8063ed40a041" containerID="0718764d0fdd929b24e8643174a060935379f003317afbb1bf7e1c4697e5b29d" exitCode=0 Nov 24 09:19:22 crc kubenswrapper[4886]: I1124 09:19:22.590973 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" event={"ID":"352e856d-6e0d-4aba-b2ce-8063ed40a041","Type":"ContainerDied","Data":"0718764d0fdd929b24e8643174a060935379f003317afbb1bf7e1c4697e5b29d"} Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.579170 4886 scope.go:117] "RemoveContainer" containerID="b3b4e689be17251d4e47ec14d33393654fb1815831a108f236ce236410b2daf9" Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.614726 4886 scope.go:117] "RemoveContainer" containerID="1a2f781f50b1db4cb1b9621aa4fdeee17b675d073a091c05c759de27d2a1091e" Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.693995 4886 scope.go:117] "RemoveContainer" containerID="8766467d25b1045be73ca05429414766d88a4024d0760bba2df09828d4f4a0a0" Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.722747 4886 scope.go:117] "RemoveContainer" containerID="00696bc1d377f8a981cb98c84bf8c9ad3a268c7c4ca0677a14dd0a50455372e5" Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.771540 4886 scope.go:117] "RemoveContainer" containerID="e8c401767cb66edae23e292dc1f191bbb8eaf7b87218ad36a6b3ecb4a4b5d8b2" Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.829724 4886 
scope.go:117] "RemoveContainer" containerID="6d670346f6590ef8a761045a7345b4b5931daeb38d9a47c05a3863b4e7414f9b" Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.867801 4886 scope.go:117] "RemoveContainer" containerID="732c728a6b3ed010564394fe88f87ac9941533d77aa0c5dbafea5a33aff6bd43" Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.898487 4886 scope.go:117] "RemoveContainer" containerID="41646597baed68efe99287ccad64edc11f2729fa330d98fb1d265cf588e08baf" Nov 24 09:19:23 crc kubenswrapper[4886]: I1124 09:19:23.953628 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.079629 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-inventory\") pod \"352e856d-6e0d-4aba-b2ce-8063ed40a041\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.079713 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-ssh-key\") pod \"352e856d-6e0d-4aba-b2ce-8063ed40a041\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.079903 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rzg6\" (UniqueName: \"kubernetes.io/projected/352e856d-6e0d-4aba-b2ce-8063ed40a041-kube-api-access-8rzg6\") pod \"352e856d-6e0d-4aba-b2ce-8063ed40a041\" (UID: \"352e856d-6e0d-4aba-b2ce-8063ed40a041\") " Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.087710 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352e856d-6e0d-4aba-b2ce-8063ed40a041-kube-api-access-8rzg6" (OuterVolumeSpecName: 
"kube-api-access-8rzg6") pod "352e856d-6e0d-4aba-b2ce-8063ed40a041" (UID: "352e856d-6e0d-4aba-b2ce-8063ed40a041"). InnerVolumeSpecName "kube-api-access-8rzg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.112792 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "352e856d-6e0d-4aba-b2ce-8063ed40a041" (UID: "352e856d-6e0d-4aba-b2ce-8063ed40a041"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.115135 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-inventory" (OuterVolumeSpecName: "inventory") pod "352e856d-6e0d-4aba-b2ce-8063ed40a041" (UID: "352e856d-6e0d-4aba-b2ce-8063ed40a041"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.183467 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.183524 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/352e856d-6e0d-4aba-b2ce-8063ed40a041-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.183540 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rzg6\" (UniqueName: \"kubernetes.io/projected/352e856d-6e0d-4aba-b2ce-8063ed40a041-kube-api-access-8rzg6\") on node \"crc\" DevicePath \"\"" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.616516 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" event={"ID":"352e856d-6e0d-4aba-b2ce-8063ed40a041","Type":"ContainerDied","Data":"c16087d76519bc9992949886d12286208a347ccacac4cd8af9481caa1d6e72b2"} Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.616568 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16087d76519bc9992949886d12286208a347ccacac4cd8af9481caa1d6e72b2" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.616576 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w42vq" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.704898 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v"] Nov 24 09:19:24 crc kubenswrapper[4886]: E1124 09:19:24.707632 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352e856d-6e0d-4aba-b2ce-8063ed40a041" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.707662 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="352e856d-6e0d-4aba-b2ce-8063ed40a041" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.707910 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="352e856d-6e0d-4aba-b2ce-8063ed40a041" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.708728 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.711352 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.711408 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.711480 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.712134 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.723538 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v"] Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.797036 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.797220 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm9wq\" (UniqueName: \"kubernetes.io/projected/23e016b0-6143-48d5-85e3-fad3392b2de4-kube-api-access-qm9wq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 
09:19:24.797353 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.899473 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.899999 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.900193 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm9wq\" (UniqueName: \"kubernetes.io/projected/23e016b0-6143-48d5-85e3-fad3392b2de4-kube-api-access-qm9wq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.905555 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.905555 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:24 crc kubenswrapper[4886]: I1124 09:19:24.919959 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm9wq\" (UniqueName: \"kubernetes.io/projected/23e016b0-6143-48d5-85e3-fad3392b2de4-kube-api-access-qm9wq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:25 crc kubenswrapper[4886]: I1124 09:19:25.029533 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:25 crc kubenswrapper[4886]: I1124 09:19:25.685182 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v"] Nov 24 09:19:26 crc kubenswrapper[4886]: I1124 09:19:26.643702 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" event={"ID":"23e016b0-6143-48d5-85e3-fad3392b2de4","Type":"ContainerStarted","Data":"2812aca60eaa263214e147e6b0467ae489250d685cede98602ccde5653b9191e"} Nov 24 09:19:26 crc kubenswrapper[4886]: I1124 09:19:26.644191 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" event={"ID":"23e016b0-6143-48d5-85e3-fad3392b2de4","Type":"ContainerStarted","Data":"3540295e522fe1f55c2f3d997422ff7dd552a56cd608b994395eb6363d662286"} Nov 24 09:19:26 crc kubenswrapper[4886]: I1124 09:19:26.664871 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" podStartSLOduration=2.18811711 podStartE2EDuration="2.664849504s" podCreationTimestamp="2025-11-24 09:19:24 +0000 UTC" firstStartedPulling="2025-11-24 09:19:25.689356315 +0000 UTC m=+1821.576094450" lastFinishedPulling="2025-11-24 09:19:26.166088709 +0000 UTC m=+1822.052826844" observedRunningTime="2025-11-24 09:19:26.661760375 +0000 UTC m=+1822.548498540" watchObservedRunningTime="2025-11-24 09:19:26.664849504 +0000 UTC m=+1822.551587639" Nov 24 09:19:31 crc kubenswrapper[4886]: I1124 09:19:31.696974 4886 generic.go:334] "Generic (PLEG): container finished" podID="23e016b0-6143-48d5-85e3-fad3392b2de4" containerID="2812aca60eaa263214e147e6b0467ae489250d685cede98602ccde5653b9191e" exitCode=0 Nov 24 09:19:31 crc kubenswrapper[4886]: I1124 09:19:31.697070 4886 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" event={"ID":"23e016b0-6143-48d5-85e3-fad3392b2de4","Type":"ContainerDied","Data":"2812aca60eaa263214e147e6b0467ae489250d685cede98602ccde5653b9191e"} Nov 24 09:19:32 crc kubenswrapper[4886]: I1124 09:19:32.849232 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:19:32 crc kubenswrapper[4886]: E1124 09:19:32.849840 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.118047 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.282454 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-ssh-key\") pod \"23e016b0-6143-48d5-85e3-fad3392b2de4\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.282547 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-inventory\") pod \"23e016b0-6143-48d5-85e3-fad3392b2de4\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.282626 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm9wq\" (UniqueName: \"kubernetes.io/projected/23e016b0-6143-48d5-85e3-fad3392b2de4-kube-api-access-qm9wq\") pod \"23e016b0-6143-48d5-85e3-fad3392b2de4\" (UID: \"23e016b0-6143-48d5-85e3-fad3392b2de4\") " Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.296363 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e016b0-6143-48d5-85e3-fad3392b2de4-kube-api-access-qm9wq" (OuterVolumeSpecName: "kube-api-access-qm9wq") pod "23e016b0-6143-48d5-85e3-fad3392b2de4" (UID: "23e016b0-6143-48d5-85e3-fad3392b2de4"). InnerVolumeSpecName "kube-api-access-qm9wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.312893 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-inventory" (OuterVolumeSpecName: "inventory") pod "23e016b0-6143-48d5-85e3-fad3392b2de4" (UID: "23e016b0-6143-48d5-85e3-fad3392b2de4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.317647 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "23e016b0-6143-48d5-85e3-fad3392b2de4" (UID: "23e016b0-6143-48d5-85e3-fad3392b2de4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.385559 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.385595 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23e016b0-6143-48d5-85e3-fad3392b2de4-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.385607 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm9wq\" (UniqueName: \"kubernetes.io/projected/23e016b0-6143-48d5-85e3-fad3392b2de4-kube-api-access-qm9wq\") on node \"crc\" DevicePath \"\"" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.716899 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" event={"ID":"23e016b0-6143-48d5-85e3-fad3392b2de4","Type":"ContainerDied","Data":"3540295e522fe1f55c2f3d997422ff7dd552a56cd608b994395eb6363d662286"} Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.716962 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3540295e522fe1f55c2f3d997422ff7dd552a56cd608b994395eb6363d662286" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.716968 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.791688 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b"] Nov 24 09:19:33 crc kubenswrapper[4886]: E1124 09:19:33.792247 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e016b0-6143-48d5-85e3-fad3392b2de4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.792272 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e016b0-6143-48d5-85e3-fad3392b2de4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.792583 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e016b0-6143-48d5-85e3-fad3392b2de4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.793471 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.795605 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.795949 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.796176 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.796349 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.801140 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b"] Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.896067 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hj7g\" (UniqueName: \"kubernetes.io/projected/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-kube-api-access-5hj7g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.896194 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.896220 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.998696 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.998767 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:33 crc kubenswrapper[4886]: I1124 09:19:33.998900 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hj7g\" (UniqueName: \"kubernetes.io/projected/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-kube-api-access-5hj7g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:34 crc kubenswrapper[4886]: I1124 09:19:34.002687 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: 
\"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:34 crc kubenswrapper[4886]: I1124 09:19:34.007898 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:34 crc kubenswrapper[4886]: I1124 09:19:34.032013 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hj7g\" (UniqueName: \"kubernetes.io/projected/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-kube-api-access-5hj7g\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4wj4b\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:34 crc kubenswrapper[4886]: I1124 09:19:34.172281 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:19:34 crc kubenswrapper[4886]: I1124 09:19:34.711816 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b"] Nov 24 09:19:34 crc kubenswrapper[4886]: I1124 09:19:34.734939 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" event={"ID":"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255","Type":"ContainerStarted","Data":"68cb6759a1b34f3c02a5852e4cc81a4556220be846f2006a37b4a0226a7a6084"} Nov 24 09:19:35 crc kubenswrapper[4886]: I1124 09:19:35.746547 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" event={"ID":"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255","Type":"ContainerStarted","Data":"dacd83772818789b7cb08b7f7d0653e616c4cbed922f4e74106bbc74243d335a"} Nov 24 09:19:44 crc kubenswrapper[4886]: I1124 09:19:44.048655 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" podStartSLOduration=10.549740786 podStartE2EDuration="11.048630724s" podCreationTimestamp="2025-11-24 09:19:33 +0000 UTC" firstStartedPulling="2025-11-24 09:19:34.717554075 +0000 UTC m=+1830.604292210" lastFinishedPulling="2025-11-24 09:19:35.216444023 +0000 UTC m=+1831.103182148" observedRunningTime="2025-11-24 09:19:35.76628247 +0000 UTC m=+1831.653020605" watchObservedRunningTime="2025-11-24 09:19:44.048630724 +0000 UTC m=+1839.935368859" Nov 24 09:19:44 crc kubenswrapper[4886]: I1124 09:19:44.050292 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c9t2q"] Nov 24 09:19:44 crc kubenswrapper[4886]: I1124 09:19:44.058440 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c9t2q"] Nov 24 09:19:44 crc kubenswrapper[4886]: I1124 
09:19:44.856463 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:19:44 crc kubenswrapper[4886]: E1124 09:19:44.856723 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:19:44 crc kubenswrapper[4886]: I1124 09:19:44.863460 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a4f443-5a71-4a49-816a-b052b3f6246c" path="/var/lib/kubelet/pods/f2a4f443-5a71-4a49-816a-b052b3f6246c/volumes" Nov 24 09:19:59 crc kubenswrapper[4886]: I1124 09:19:59.849454 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:19:59 crc kubenswrapper[4886]: E1124 09:19:59.850450 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:20:05 crc kubenswrapper[4886]: I1124 09:20:05.051189 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rmkch"] Nov 24 09:20:05 crc kubenswrapper[4886]: I1124 09:20:05.059065 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rmkch"] Nov 24 09:20:06 crc kubenswrapper[4886]: I1124 09:20:06.862447 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a7266b1c-bb21-4f54-994c-52ab5db8d4eb" path="/var/lib/kubelet/pods/a7266b1c-bb21-4f54-994c-52ab5db8d4eb/volumes" Nov 24 09:20:09 crc kubenswrapper[4886]: I1124 09:20:09.031266 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c59cj"] Nov 24 09:20:09 crc kubenswrapper[4886]: I1124 09:20:09.038478 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c59cj"] Nov 24 09:20:10 crc kubenswrapper[4886]: I1124 09:20:10.872214 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="894a9de8-fef6-45c0-9464-fca3f25587e9" path="/var/lib/kubelet/pods/894a9de8-fef6-45c0-9464-fca3f25587e9/volumes" Nov 24 09:20:12 crc kubenswrapper[4886]: I1124 09:20:12.850008 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:20:12 crc kubenswrapper[4886]: E1124 09:20:12.850812 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:20:13 crc kubenswrapper[4886]: I1124 09:20:13.100422 4886 generic.go:334] "Generic (PLEG): container finished" podID="06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255" containerID="dacd83772818789b7cb08b7f7d0653e616c4cbed922f4e74106bbc74243d335a" exitCode=0 Nov 24 09:20:13 crc kubenswrapper[4886]: I1124 09:20:13.100487 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" event={"ID":"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255","Type":"ContainerDied","Data":"dacd83772818789b7cb08b7f7d0653e616c4cbed922f4e74106bbc74243d335a"} Nov 24 
09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.571577 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.695980 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hj7g\" (UniqueName: \"kubernetes.io/projected/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-kube-api-access-5hj7g\") pod \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.696108 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-ssh-key\") pod \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.696413 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-inventory\") pod \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\" (UID: \"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255\") " Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.703886 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-kube-api-access-5hj7g" (OuterVolumeSpecName: "kube-api-access-5hj7g") pod "06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255" (UID: "06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255"). InnerVolumeSpecName "kube-api-access-5hj7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.725875 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255" (UID: "06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.733401 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-inventory" (OuterVolumeSpecName: "inventory") pod "06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255" (UID: "06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.798906 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.799230 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hj7g\" (UniqueName: \"kubernetes.io/projected/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-kube-api-access-5hj7g\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:14 crc kubenswrapper[4886]: I1124 09:20:14.799321 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.121691 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" 
event={"ID":"06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255","Type":"ContainerDied","Data":"68cb6759a1b34f3c02a5852e4cc81a4556220be846f2006a37b4a0226a7a6084"} Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.121734 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68cb6759a1b34f3c02a5852e4cc81a4556220be846f2006a37b4a0226a7a6084" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.121793 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4wj4b" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.302664 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5"] Nov 24 09:20:15 crc kubenswrapper[4886]: E1124 09:20:15.303966 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.304211 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.304843 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.306089 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.315950 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.316723 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.316808 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.316712 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.330975 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5"] Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.420661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxpm\" (UniqueName: \"kubernetes.io/projected/f7b875a5-9e9f-43bc-b6da-48223ea2c653-kube-api-access-fsxpm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.421005 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.421138 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.523508 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.523683 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxpm\" (UniqueName: \"kubernetes.io/projected/f7b875a5-9e9f-43bc-b6da-48223ea2c653-kube-api-access-fsxpm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.523775 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.529254 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: 
\"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.541036 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.542046 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsxpm\" (UniqueName: \"kubernetes.io/projected/f7b875a5-9e9f-43bc-b6da-48223ea2c653-kube-api-access-fsxpm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:15 crc kubenswrapper[4886]: I1124 09:20:15.639832 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:20:16 crc kubenswrapper[4886]: I1124 09:20:16.171347 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5"] Nov 24 09:20:17 crc kubenswrapper[4886]: I1124 09:20:17.143887 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" event={"ID":"f7b875a5-9e9f-43bc-b6da-48223ea2c653","Type":"ContainerStarted","Data":"635d3d32192f50c681fd9c6c446ff4e1887a32145938d1a70f2853302ef1b5d6"} Nov 24 09:20:17 crc kubenswrapper[4886]: I1124 09:20:17.144368 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" event={"ID":"f7b875a5-9e9f-43bc-b6da-48223ea2c653","Type":"ContainerStarted","Data":"6066825c96eeb82c8ea5c3c25a3f70b5b62338e0e1bcfd4a31711747227526ae"} Nov 24 09:20:17 crc kubenswrapper[4886]: I1124 09:20:17.160123 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" podStartSLOduration=1.459000873 podStartE2EDuration="2.160098659s" podCreationTimestamp="2025-11-24 09:20:15 +0000 UTC" firstStartedPulling="2025-11-24 09:20:16.182766957 +0000 UTC m=+1872.069505092" lastFinishedPulling="2025-11-24 09:20:16.883864733 +0000 UTC m=+1872.770602878" observedRunningTime="2025-11-24 09:20:17.159186283 +0000 UTC m=+1873.045924438" watchObservedRunningTime="2025-11-24 09:20:17.160098659 +0000 UTC m=+1873.046836794" Nov 24 09:20:24 crc kubenswrapper[4886]: I1124 09:20:24.133408 4886 scope.go:117] "RemoveContainer" containerID="4785f8ddb6e0c935ba3f80c8611df368145a2d78eba83d4bfc13e053f494c5c9" Nov 24 09:20:24 crc kubenswrapper[4886]: I1124 09:20:24.183357 4886 scope.go:117] "RemoveContainer" containerID="239439736ecdb042aea787b616b9c9fc9e8422bebfb64a7cc6b3c3993243325e" Nov 24 09:20:24 crc 
kubenswrapper[4886]: I1124 09:20:24.232806 4886 scope.go:117] "RemoveContainer" containerID="1abe7a1f7783f8358d0fa155117a9f0b2511fc0b3628864bbfc37391e4d314cf" Nov 24 09:20:25 crc kubenswrapper[4886]: I1124 09:20:25.849479 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:20:25 crc kubenswrapper[4886]: E1124 09:20:25.850013 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:20:38 crc kubenswrapper[4886]: I1124 09:20:38.852842 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:20:38 crc kubenswrapper[4886]: E1124 09:20:38.853736 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:20:49 crc kubenswrapper[4886]: I1124 09:20:49.849771 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:20:49 crc kubenswrapper[4886]: E1124 09:20:49.850621 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:20:50 crc kubenswrapper[4886]: I1124 09:20:50.038952 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bx8hx"] Nov 24 09:20:50 crc kubenswrapper[4886]: I1124 09:20:50.046451 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bx8hx"] Nov 24 09:20:50 crc kubenswrapper[4886]: I1124 09:20:50.862954 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1ca2352-9c13-4db2-9c3a-ce2557f39968" path="/var/lib/kubelet/pods/d1ca2352-9c13-4db2-9c3a-ce2557f39968/volumes" Nov 24 09:21:00 crc kubenswrapper[4886]: I1124 09:21:00.849420 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:21:00 crc kubenswrapper[4886]: E1124 09:21:00.850249 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:21:06 crc kubenswrapper[4886]: I1124 09:21:06.611782 4886 generic.go:334] "Generic (PLEG): container finished" podID="f7b875a5-9e9f-43bc-b6da-48223ea2c653" containerID="635d3d32192f50c681fd9c6c446ff4e1887a32145938d1a70f2853302ef1b5d6" exitCode=0 Nov 24 09:21:06 crc kubenswrapper[4886]: I1124 09:21:06.611877 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" 
event={"ID":"f7b875a5-9e9f-43bc-b6da-48223ea2c653","Type":"ContainerDied","Data":"635d3d32192f50c681fd9c6c446ff4e1887a32145938d1a70f2853302ef1b5d6"} Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.083327 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.238302 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-inventory\") pod \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.238901 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-ssh-key\") pod \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.238948 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsxpm\" (UniqueName: \"kubernetes.io/projected/f7b875a5-9e9f-43bc-b6da-48223ea2c653-kube-api-access-fsxpm\") pod \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\" (UID: \"f7b875a5-9e9f-43bc-b6da-48223ea2c653\") " Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.243740 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b875a5-9e9f-43bc-b6da-48223ea2c653-kube-api-access-fsxpm" (OuterVolumeSpecName: "kube-api-access-fsxpm") pod "f7b875a5-9e9f-43bc-b6da-48223ea2c653" (UID: "f7b875a5-9e9f-43bc-b6da-48223ea2c653"). InnerVolumeSpecName "kube-api-access-fsxpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.266330 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f7b875a5-9e9f-43bc-b6da-48223ea2c653" (UID: "f7b875a5-9e9f-43bc-b6da-48223ea2c653"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.268624 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-inventory" (OuterVolumeSpecName: "inventory") pod "f7b875a5-9e9f-43bc-b6da-48223ea2c653" (UID: "f7b875a5-9e9f-43bc-b6da-48223ea2c653"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.340850 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.340888 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsxpm\" (UniqueName: \"kubernetes.io/projected/f7b875a5-9e9f-43bc-b6da-48223ea2c653-kube-api-access-fsxpm\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.340899 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7b875a5-9e9f-43bc-b6da-48223ea2c653-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.636995 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" 
event={"ID":"f7b875a5-9e9f-43bc-b6da-48223ea2c653","Type":"ContainerDied","Data":"6066825c96eeb82c8ea5c3c25a3f70b5b62338e0e1bcfd4a31711747227526ae"} Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.637064 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6066825c96eeb82c8ea5c3c25a3f70b5b62338e0e1bcfd4a31711747227526ae" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.637140 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.730092 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r4dj2"] Nov 24 09:21:08 crc kubenswrapper[4886]: E1124 09:21:08.730489 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b875a5-9e9f-43bc-b6da-48223ea2c653" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.730509 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b875a5-9e9f-43bc-b6da-48223ea2c653" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.730730 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b875a5-9e9f-43bc-b6da-48223ea2c653" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.731425 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.734861 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.735138 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.735384 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.735514 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.749230 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r4dj2"] Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.850698 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrqdb\" (UniqueName: \"kubernetes.io/projected/c9133cae-660e-41cc-ad42-4b3772bdcdfe-kube-api-access-xrqdb\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.850778 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.850815 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.953238 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrqdb\" (UniqueName: \"kubernetes.io/projected/c9133cae-660e-41cc-ad42-4b3772bdcdfe-kube-api-access-xrqdb\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.953299 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.953339 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.958802 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc 
kubenswrapper[4886]: I1124 09:21:08.961795 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:08 crc kubenswrapper[4886]: I1124 09:21:08.970006 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrqdb\" (UniqueName: \"kubernetes.io/projected/c9133cae-660e-41cc-ad42-4b3772bdcdfe-kube-api-access-xrqdb\") pod \"ssh-known-hosts-edpm-deployment-r4dj2\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:09 crc kubenswrapper[4886]: I1124 09:21:09.049308 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:09 crc kubenswrapper[4886]: I1124 09:21:09.548666 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-r4dj2"] Nov 24 09:21:09 crc kubenswrapper[4886]: I1124 09:21:09.647477 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" event={"ID":"c9133cae-660e-41cc-ad42-4b3772bdcdfe","Type":"ContainerStarted","Data":"128e15575d245445748558f8170ac466f6531c3a8335a6381b06fdefd1c5a86a"} Nov 24 09:21:11 crc kubenswrapper[4886]: I1124 09:21:11.665827 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" event={"ID":"c9133cae-660e-41cc-ad42-4b3772bdcdfe","Type":"ContainerStarted","Data":"704bc3122a154e6bbb7fb2db1982b0dd12398398f3770e6f248a0a69f790f832"} Nov 24 09:21:11 crc kubenswrapper[4886]: I1124 09:21:11.680773 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" podStartSLOduration=2.829655587 podStartE2EDuration="3.680752152s" podCreationTimestamp="2025-11-24 09:21:08 +0000 UTC" firstStartedPulling="2025-11-24 09:21:09.558976161 +0000 UTC m=+1925.445714296" lastFinishedPulling="2025-11-24 09:21:10.410072726 +0000 UTC m=+1926.296810861" observedRunningTime="2025-11-24 09:21:11.678456376 +0000 UTC m=+1927.565194511" watchObservedRunningTime="2025-11-24 09:21:11.680752152 +0000 UTC m=+1927.567490287" Nov 24 09:21:11 crc kubenswrapper[4886]: I1124 09:21:11.849415 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:21:11 crc kubenswrapper[4886]: E1124 09:21:11.849907 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:21:17 crc kubenswrapper[4886]: I1124 09:21:17.713351 4886 generic.go:334] "Generic (PLEG): container finished" podID="c9133cae-660e-41cc-ad42-4b3772bdcdfe" containerID="704bc3122a154e6bbb7fb2db1982b0dd12398398f3770e6f248a0a69f790f832" exitCode=0 Nov 24 09:21:17 crc kubenswrapper[4886]: I1124 09:21:17.713410 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" event={"ID":"c9133cae-660e-41cc-ad42-4b3772bdcdfe","Type":"ContainerDied","Data":"704bc3122a154e6bbb7fb2db1982b0dd12398398f3770e6f248a0a69f790f832"} Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.112548 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.295559 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrqdb\" (UniqueName: \"kubernetes.io/projected/c9133cae-660e-41cc-ad42-4b3772bdcdfe-kube-api-access-xrqdb\") pod \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.296009 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-ssh-key-openstack-edpm-ipam\") pod \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.296283 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-inventory-0\") pod \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\" (UID: \"c9133cae-660e-41cc-ad42-4b3772bdcdfe\") " Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.315613 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9133cae-660e-41cc-ad42-4b3772bdcdfe-kube-api-access-xrqdb" (OuterVolumeSpecName: "kube-api-access-xrqdb") pod "c9133cae-660e-41cc-ad42-4b3772bdcdfe" (UID: "c9133cae-660e-41cc-ad42-4b3772bdcdfe"). InnerVolumeSpecName "kube-api-access-xrqdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.325675 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c9133cae-660e-41cc-ad42-4b3772bdcdfe" (UID: "c9133cae-660e-41cc-ad42-4b3772bdcdfe"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.328299 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9133cae-660e-41cc-ad42-4b3772bdcdfe" (UID: "c9133cae-660e-41cc-ad42-4b3772bdcdfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.399597 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrqdb\" (UniqueName: \"kubernetes.io/projected/c9133cae-660e-41cc-ad42-4b3772bdcdfe-kube-api-access-xrqdb\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.399631 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.399641 4886 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c9133cae-660e-41cc-ad42-4b3772bdcdfe-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.739048 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" event={"ID":"c9133cae-660e-41cc-ad42-4b3772bdcdfe","Type":"ContainerDied","Data":"128e15575d245445748558f8170ac466f6531c3a8335a6381b06fdefd1c5a86a"} Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.739101 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="128e15575d245445748558f8170ac466f6531c3a8335a6381b06fdefd1c5a86a" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.739177 
4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-r4dj2" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.817419 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l"] Nov 24 09:21:19 crc kubenswrapper[4886]: E1124 09:21:19.817782 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9133cae-660e-41cc-ad42-4b3772bdcdfe" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.817799 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9133cae-660e-41cc-ad42-4b3772bdcdfe" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.818000 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9133cae-660e-41cc-ad42-4b3772bdcdfe" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.818628 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.820635 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.821145 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.821145 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.822396 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.829121 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l"] Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.910113 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.910210 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:19 crc kubenswrapper[4886]: I1124 09:21:19.910259 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvf7f\" (UniqueName: \"kubernetes.io/projected/80cccca8-e8d6-4772-b514-83482acf917e-kube-api-access-rvf7f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.012000 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.012041 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.012083 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvf7f\" (UniqueName: \"kubernetes.io/projected/80cccca8-e8d6-4772-b514-83482acf917e-kube-api-access-rvf7f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.026765 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.026783 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.029494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvf7f\" (UniqueName: \"kubernetes.io/projected/80cccca8-e8d6-4772-b514-83482acf917e-kube-api-access-rvf7f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-m2f4l\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.193260 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.702175 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l"] Nov 24 09:21:20 crc kubenswrapper[4886]: I1124 09:21:20.749723 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" event={"ID":"80cccca8-e8d6-4772-b514-83482acf917e","Type":"ContainerStarted","Data":"48a4d2c12ac976f73ca760deb279fa48324e9dfa6a370be393f005707e37e45f"} Nov 24 09:21:21 crc kubenswrapper[4886]: I1124 09:21:21.761092 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" event={"ID":"80cccca8-e8d6-4772-b514-83482acf917e","Type":"ContainerStarted","Data":"d038e9bd972b731ab7e52c744787c517235d15e8eba87db11f47c1918e58f147"} Nov 24 09:21:21 crc kubenswrapper[4886]: I1124 09:21:21.777380 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" podStartSLOduration=2.281662505 podStartE2EDuration="2.777361246s" podCreationTimestamp="2025-11-24 09:21:19 +0000 UTC" firstStartedPulling="2025-11-24 09:21:20.709641729 +0000 UTC m=+1936.596379864" lastFinishedPulling="2025-11-24 09:21:21.20534047 +0000 UTC m=+1937.092078605" observedRunningTime="2025-11-24 09:21:21.773452524 +0000 UTC m=+1937.660190659" watchObservedRunningTime="2025-11-24 09:21:21.777361246 +0000 UTC m=+1937.664099381" Nov 24 09:21:24 crc kubenswrapper[4886]: I1124 09:21:24.328236 4886 scope.go:117] "RemoveContainer" containerID="5766d62296fc2d378e3a02a3368370cf6f39033324586106db432cd09ca6b173" Nov 24 09:21:24 crc kubenswrapper[4886]: I1124 09:21:24.855646 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:21:24 crc kubenswrapper[4886]: E1124 
09:21:24.856212 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:21:29 crc kubenswrapper[4886]: I1124 09:21:29.837843 4886 generic.go:334] "Generic (PLEG): container finished" podID="80cccca8-e8d6-4772-b514-83482acf917e" containerID="d038e9bd972b731ab7e52c744787c517235d15e8eba87db11f47c1918e58f147" exitCode=0 Nov 24 09:21:29 crc kubenswrapper[4886]: I1124 09:21:29.837871 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" event={"ID":"80cccca8-e8d6-4772-b514-83482acf917e","Type":"ContainerDied","Data":"d038e9bd972b731ab7e52c744787c517235d15e8eba87db11f47c1918e58f147"} Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.233099 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.341399 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-ssh-key\") pod \"80cccca8-e8d6-4772-b514-83482acf917e\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.341533 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvf7f\" (UniqueName: \"kubernetes.io/projected/80cccca8-e8d6-4772-b514-83482acf917e-kube-api-access-rvf7f\") pod \"80cccca8-e8d6-4772-b514-83482acf917e\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.342020 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-inventory\") pod \"80cccca8-e8d6-4772-b514-83482acf917e\" (UID: \"80cccca8-e8d6-4772-b514-83482acf917e\") " Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.350464 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cccca8-e8d6-4772-b514-83482acf917e-kube-api-access-rvf7f" (OuterVolumeSpecName: "kube-api-access-rvf7f") pod "80cccca8-e8d6-4772-b514-83482acf917e" (UID: "80cccca8-e8d6-4772-b514-83482acf917e"). InnerVolumeSpecName "kube-api-access-rvf7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.369993 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "80cccca8-e8d6-4772-b514-83482acf917e" (UID: "80cccca8-e8d6-4772-b514-83482acf917e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.373527 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-inventory" (OuterVolumeSpecName: "inventory") pod "80cccca8-e8d6-4772-b514-83482acf917e" (UID: "80cccca8-e8d6-4772-b514-83482acf917e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.444780 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvf7f\" (UniqueName: \"kubernetes.io/projected/80cccca8-e8d6-4772-b514-83482acf917e-kube-api-access-rvf7f\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.445105 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.445117 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80cccca8-e8d6-4772-b514-83482acf917e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.856105 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" event={"ID":"80cccca8-e8d6-4772-b514-83482acf917e","Type":"ContainerDied","Data":"48a4d2c12ac976f73ca760deb279fa48324e9dfa6a370be393f005707e37e45f"} Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.856143 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48a4d2c12ac976f73ca760deb279fa48324e9dfa6a370be393f005707e37e45f" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.856204 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-m2f4l" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.925019 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l"] Nov 24 09:21:31 crc kubenswrapper[4886]: E1124 09:21:31.925542 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cccca8-e8d6-4772-b514-83482acf917e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.925567 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cccca8-e8d6-4772-b514-83482acf917e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.925826 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cccca8-e8d6-4772-b514-83482acf917e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.926724 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.930286 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.930487 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.930710 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.930489 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:21:31 crc kubenswrapper[4886]: I1124 09:21:31.935549 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l"] Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.055370 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnpb\" (UniqueName: \"kubernetes.io/projected/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-kube-api-access-qtnpb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.055422 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.055456 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.157538 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtnpb\" (UniqueName: \"kubernetes.io/projected/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-kube-api-access-qtnpb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.157609 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.157654 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.163762 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.170837 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.173174 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtnpb\" (UniqueName: \"kubernetes.io/projected/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-kube-api-access-qtnpb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.243513 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.811329 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l"] Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.820106 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:21:32 crc kubenswrapper[4886]: I1124 09:21:32.871661 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" event={"ID":"cc0c00e3-1e23-4800-9a47-8d86397ba6f3","Type":"ContainerStarted","Data":"0a9fa0a11ab0e0f3b62b10d9d851627069ded6b3ad2650fdce2e79bead9f3690"} Nov 24 09:21:33 crc kubenswrapper[4886]: I1124 09:21:33.881935 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" event={"ID":"cc0c00e3-1e23-4800-9a47-8d86397ba6f3","Type":"ContainerStarted","Data":"88d0e55f0069a98d06fc81abe6e9fcd905047f5bae25987ca2c1c28c80a53de4"} Nov 24 09:21:35 crc kubenswrapper[4886]: I1124 09:21:35.849822 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:21:35 crc kubenswrapper[4886]: E1124 09:21:35.850439 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:21:43 crc kubenswrapper[4886]: I1124 09:21:43.963064 4886 generic.go:334] "Generic (PLEG): container finished" podID="cc0c00e3-1e23-4800-9a47-8d86397ba6f3" 
containerID="88d0e55f0069a98d06fc81abe6e9fcd905047f5bae25987ca2c1c28c80a53de4" exitCode=0 Nov 24 09:21:43 crc kubenswrapper[4886]: I1124 09:21:43.963300 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" event={"ID":"cc0c00e3-1e23-4800-9a47-8d86397ba6f3","Type":"ContainerDied","Data":"88d0e55f0069a98d06fc81abe6e9fcd905047f5bae25987ca2c1c28c80a53de4"} Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.458740 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.559971 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-ssh-key\") pod \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.560069 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtnpb\" (UniqueName: \"kubernetes.io/projected/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-kube-api-access-qtnpb\") pod \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.560231 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-inventory\") pod \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\" (UID: \"cc0c00e3-1e23-4800-9a47-8d86397ba6f3\") " Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.565695 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-kube-api-access-qtnpb" (OuterVolumeSpecName: "kube-api-access-qtnpb") pod "cc0c00e3-1e23-4800-9a47-8d86397ba6f3" 
(UID: "cc0c00e3-1e23-4800-9a47-8d86397ba6f3"). InnerVolumeSpecName "kube-api-access-qtnpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.590506 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-inventory" (OuterVolumeSpecName: "inventory") pod "cc0c00e3-1e23-4800-9a47-8d86397ba6f3" (UID: "cc0c00e3-1e23-4800-9a47-8d86397ba6f3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.596843 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cc0c00e3-1e23-4800-9a47-8d86397ba6f3" (UID: "cc0c00e3-1e23-4800-9a47-8d86397ba6f3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.663741 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.663795 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtnpb\" (UniqueName: \"kubernetes.io/projected/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-kube-api-access-qtnpb\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.663808 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0c00e3-1e23-4800-9a47-8d86397ba6f3-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.984248 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" 
event={"ID":"cc0c00e3-1e23-4800-9a47-8d86397ba6f3","Type":"ContainerDied","Data":"0a9fa0a11ab0e0f3b62b10d9d851627069ded6b3ad2650fdce2e79bead9f3690"} Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.984560 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a9fa0a11ab0e0f3b62b10d9d851627069ded6b3ad2650fdce2e79bead9f3690" Nov 24 09:21:45 crc kubenswrapper[4886]: I1124 09:21:45.984307 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.063197 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff"] Nov 24 09:21:46 crc kubenswrapper[4886]: E1124 09:21:46.063606 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0c00e3-1e23-4800-9a47-8d86397ba6f3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.063623 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0c00e3-1e23-4800-9a47-8d86397ba6f3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.063835 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0c00e3-1e23-4800-9a47-8d86397ba6f3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.064464 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.066525 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.066796 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.066929 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.067851 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.068044 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.072459 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.072523 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.073240 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.082778 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff"] Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.174945 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.175318 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.175432 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.175583 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.175894 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176018 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176142 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzv56\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-kube-api-access-pzv56\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176322 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176506 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176620 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176722 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176798 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176835 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.176859 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279340 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279414 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279434 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: 
\"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279473 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzv56\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-kube-api-access-pzv56\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279493 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279523 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279621 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279656 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279685 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279712 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279734 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279781 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.279822 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.285761 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" 
Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.285687 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.286557 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.287450 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.287626 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.287895 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.288040 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.290752 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.291278 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.291808 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.292363 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.295044 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.298534 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.300015 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzv56\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-kube-api-access-pzv56\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-twcff\" (UID: 
\"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.395839 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.850109 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:21:46 crc kubenswrapper[4886]: E1124 09:21:46.850710 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.923135 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff"] Nov 24 09:21:46 crc kubenswrapper[4886]: I1124 09:21:46.993659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" event={"ID":"06314c58-da5f-46e4-ac6d-63f95ca6a6f9","Type":"ContainerStarted","Data":"369846eac106c9281fa9dd43fa507dd2cb2fc9b559379d7abb7c2613e29fab04"} Nov 24 09:21:50 crc kubenswrapper[4886]: I1124 09:21:50.021514 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" event={"ID":"06314c58-da5f-46e4-ac6d-63f95ca6a6f9","Type":"ContainerStarted","Data":"0053f556347e68d50b080c0d5d24f8f59f406d12f431d46b89c09440bd52cdf7"} Nov 24 09:21:50 crc kubenswrapper[4886]: I1124 09:21:50.047110 4886 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" podStartSLOduration=2.346863382 podStartE2EDuration="4.047087866s" podCreationTimestamp="2025-11-24 09:21:46 +0000 UTC" firstStartedPulling="2025-11-24 09:21:46.931693089 +0000 UTC m=+1962.818431214" lastFinishedPulling="2025-11-24 09:21:48.631917563 +0000 UTC m=+1964.518655698" observedRunningTime="2025-11-24 09:21:50.040275801 +0000 UTC m=+1965.927013936" watchObservedRunningTime="2025-11-24 09:21:50.047087866 +0000 UTC m=+1965.933826001" Nov 24 09:22:01 crc kubenswrapper[4886]: I1124 09:22:01.850077 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:22:02 crc kubenswrapper[4886]: I1124 09:22:02.125626 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"67754098990f83baab3456c52fb1373130119f025267270bb714b30248f4edbb"} Nov 24 09:22:27 crc kubenswrapper[4886]: I1124 09:22:27.374802 4886 generic.go:334] "Generic (PLEG): container finished" podID="06314c58-da5f-46e4-ac6d-63f95ca6a6f9" containerID="0053f556347e68d50b080c0d5d24f8f59f406d12f431d46b89c09440bd52cdf7" exitCode=0 Nov 24 09:22:27 crc kubenswrapper[4886]: I1124 09:22:27.374893 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" event={"ID":"06314c58-da5f-46e4-ac6d-63f95ca6a6f9","Type":"ContainerDied","Data":"0053f556347e68d50b080c0d5d24f8f59f406d12f431d46b89c09440bd52cdf7"} Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.799958 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.952597 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-repo-setup-combined-ca-bundle\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.952693 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.953811 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ovn-combined-ca-bundle\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.953854 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-telemetry-combined-ca-bundle\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.953875 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.953931 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzv56\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-kube-api-access-pzv56\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.954033 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-neutron-metadata-combined-ca-bundle\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.954079 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-nova-combined-ca-bundle\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.954100 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ssh-key\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.954144 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-bootstrap-combined-ca-bundle\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.954191 
4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.954231 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-libvirt-combined-ca-bundle\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.954273 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.954316 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-inventory\") pod \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\" (UID: \"06314c58-da5f-46e4-ac6d-63f95ca6a6f9\") " Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.960251 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.962593 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.962794 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.962985 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.963115 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.963242 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.963386 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.963542 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-kube-api-access-pzv56" (OuterVolumeSpecName: "kube-api-access-pzv56") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "kube-api-access-pzv56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.963911 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.964467 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.965824 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.966325 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.989429 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-inventory" (OuterVolumeSpecName: "inventory") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:28 crc kubenswrapper[4886]: I1124 09:22:28.989364 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06314c58-da5f-46e4-ac6d-63f95ca6a6f9" (UID: "06314c58-da5f-46e4-ac6d-63f95ca6a6f9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057189 4886 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057240 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057254 4886 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057265 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057275 4886 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057287 4886 reconciler_common.go:293] 
"Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057297 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057306 4886 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057315 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057326 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057334 4886 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057343 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on 
node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057353 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzv56\" (UniqueName: \"kubernetes.io/projected/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-kube-api-access-pzv56\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.057361 4886 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06314c58-da5f-46e4-ac6d-63f95ca6a6f9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.396072 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" event={"ID":"06314c58-da5f-46e4-ac6d-63f95ca6a6f9","Type":"ContainerDied","Data":"369846eac106c9281fa9dd43fa507dd2cb2fc9b559379d7abb7c2613e29fab04"} Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.396434 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="369846eac106c9281fa9dd43fa507dd2cb2fc9b559379d7abb7c2613e29fab04" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.396134 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-twcff" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.489006 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42"] Nov 24 09:22:29 crc kubenswrapper[4886]: E1124 09:22:29.489526 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06314c58-da5f-46e4-ac6d-63f95ca6a6f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.489547 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="06314c58-da5f-46e4-ac6d-63f95ca6a6f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.489736 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="06314c58-da5f-46e4-ac6d-63f95ca6a6f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.490412 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.493677 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.493709 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.493886 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.494039 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.494078 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.498539 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42"] Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.668722 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.668807 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cde6df39-d639-4855-a34f-29ff9af5c870-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: 
\"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.669036 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xzkn\" (UniqueName: \"kubernetes.io/projected/cde6df39-d639-4855-a34f-29ff9af5c870-kube-api-access-5xzkn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.669374 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.669478 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.771091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.771186 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cde6df39-d639-4855-a34f-29ff9af5c870-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.771232 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xzkn\" (UniqueName: \"kubernetes.io/projected/cde6df39-d639-4855-a34f-29ff9af5c870-kube-api-access-5xzkn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.771283 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.771326 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.772441 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cde6df39-d639-4855-a34f-29ff9af5c870-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc 
kubenswrapper[4886]: I1124 09:22:29.777123 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.777427 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.778091 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.793795 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xzkn\" (UniqueName: \"kubernetes.io/projected/cde6df39-d639-4855-a34f-29ff9af5c870-kube-api-access-5xzkn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqb42\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:29 crc kubenswrapper[4886]: I1124 09:22:29.806409 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:22:30 crc kubenswrapper[4886]: I1124 09:22:30.189918 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42"] Nov 24 09:22:30 crc kubenswrapper[4886]: W1124 09:22:30.199771 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcde6df39_d639_4855_a34f_29ff9af5c870.slice/crio-a1d655b4e2d38c2381a09023fa5bd8447a542c0f4adb3fe7a1b3ad02fdc1639f WatchSource:0}: Error finding container a1d655b4e2d38c2381a09023fa5bd8447a542c0f4adb3fe7a1b3ad02fdc1639f: Status 404 returned error can't find the container with id a1d655b4e2d38c2381a09023fa5bd8447a542c0f4adb3fe7a1b3ad02fdc1639f Nov 24 09:22:30 crc kubenswrapper[4886]: I1124 09:22:30.406706 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" event={"ID":"cde6df39-d639-4855-a34f-29ff9af5c870","Type":"ContainerStarted","Data":"a1d655b4e2d38c2381a09023fa5bd8447a542c0f4adb3fe7a1b3ad02fdc1639f"} Nov 24 09:22:31 crc kubenswrapper[4886]: I1124 09:22:31.451119 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" event={"ID":"cde6df39-d639-4855-a34f-29ff9af5c870","Type":"ContainerStarted","Data":"3d281cbaddf3af25e132613ab6a3355fb7755d4784de16c541c0b3655ef94ddb"} Nov 24 09:22:31 crc kubenswrapper[4886]: I1124 09:22:31.481891 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" podStartSLOduration=1.9633165369999999 podStartE2EDuration="2.481877092s" podCreationTimestamp="2025-11-24 09:22:29 +0000 UTC" firstStartedPulling="2025-11-24 09:22:30.202927668 +0000 UTC m=+2006.089665803" lastFinishedPulling="2025-11-24 09:22:30.721488223 +0000 UTC m=+2006.608226358" observedRunningTime="2025-11-24 
09:22:31.479550505 +0000 UTC m=+2007.366288640" watchObservedRunningTime="2025-11-24 09:22:31.481877092 +0000 UTC m=+2007.368615227" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.173591 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mjq7h"] Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.179939 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.188953 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjq7h"] Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.273780 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-utilities\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.274052 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-catalog-content\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.274186 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fbfn\" (UniqueName: \"kubernetes.io/projected/cac65de9-7fce-4fbf-8a67-cd792546e130-kube-api-access-9fbfn\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.376437 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-utilities\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.376798 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-catalog-content\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.376834 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbfn\" (UniqueName: \"kubernetes.io/projected/cac65de9-7fce-4fbf-8a67-cd792546e130-kube-api-access-9fbfn\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.377027 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-utilities\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.377189 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-catalog-content\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.399101 4886 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9fbfn\" (UniqueName: \"kubernetes.io/projected/cac65de9-7fce-4fbf-8a67-cd792546e130-kube-api-access-9fbfn\") pod \"community-operators-mjq7h\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:53 crc kubenswrapper[4886]: I1124 09:22:53.516835 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:22:54 crc kubenswrapper[4886]: I1124 09:22:54.068831 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjq7h"] Nov 24 09:22:54 crc kubenswrapper[4886]: I1124 09:22:54.663046 4886 generic.go:334] "Generic (PLEG): container finished" podID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerID="9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91" exitCode=0 Nov 24 09:22:54 crc kubenswrapper[4886]: I1124 09:22:54.663229 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjq7h" event={"ID":"cac65de9-7fce-4fbf-8a67-cd792546e130","Type":"ContainerDied","Data":"9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91"} Nov 24 09:22:54 crc kubenswrapper[4886]: I1124 09:22:54.663391 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjq7h" event={"ID":"cac65de9-7fce-4fbf-8a67-cd792546e130","Type":"ContainerStarted","Data":"e65de8c05792190538685a61ca28448cf19c7c455b79057a81d3dfa9d02cce04"} Nov 24 09:22:55 crc kubenswrapper[4886]: I1124 09:22:55.675226 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjq7h" event={"ID":"cac65de9-7fce-4fbf-8a67-cd792546e130","Type":"ContainerStarted","Data":"0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529"} Nov 24 09:22:56 crc kubenswrapper[4886]: I1124 09:22:56.686106 4886 generic.go:334] "Generic (PLEG): 
container finished" podID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerID="0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529" exitCode=0 Nov 24 09:22:56 crc kubenswrapper[4886]: I1124 09:22:56.686204 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjq7h" event={"ID":"cac65de9-7fce-4fbf-8a67-cd792546e130","Type":"ContainerDied","Data":"0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529"} Nov 24 09:22:57 crc kubenswrapper[4886]: I1124 09:22:57.697323 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjq7h" event={"ID":"cac65de9-7fce-4fbf-8a67-cd792546e130","Type":"ContainerStarted","Data":"800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3"} Nov 24 09:22:57 crc kubenswrapper[4886]: I1124 09:22:57.718290 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mjq7h" podStartSLOduration=2.220976252 podStartE2EDuration="4.718266893s" podCreationTimestamp="2025-11-24 09:22:53 +0000 UTC" firstStartedPulling="2025-11-24 09:22:54.665720876 +0000 UTC m=+2030.552459011" lastFinishedPulling="2025-11-24 09:22:57.163011517 +0000 UTC m=+2033.049749652" observedRunningTime="2025-11-24 09:22:57.717869262 +0000 UTC m=+2033.604607407" watchObservedRunningTime="2025-11-24 09:22:57.718266893 +0000 UTC m=+2033.605005028" Nov 24 09:23:03 crc kubenswrapper[4886]: I1124 09:23:03.517506 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:23:03 crc kubenswrapper[4886]: I1124 09:23:03.518060 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:23:03 crc kubenswrapper[4886]: I1124 09:23:03.574047 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:23:03 crc kubenswrapper[4886]: I1124 09:23:03.799775 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:23:03 crc kubenswrapper[4886]: I1124 09:23:03.857674 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mjq7h"] Nov 24 09:23:05 crc kubenswrapper[4886]: I1124 09:23:05.769872 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mjq7h" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerName="registry-server" containerID="cri-o://800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3" gracePeriod=2 Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.743823 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.788909 4886 generic.go:334] "Generic (PLEG): container finished" podID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerID="800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3" exitCode=0 Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.788983 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjq7h" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.788981 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjq7h" event={"ID":"cac65de9-7fce-4fbf-8a67-cd792546e130","Type":"ContainerDied","Data":"800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3"} Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.789049 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjq7h" event={"ID":"cac65de9-7fce-4fbf-8a67-cd792546e130","Type":"ContainerDied","Data":"e65de8c05792190538685a61ca28448cf19c7c455b79057a81d3dfa9d02cce04"} Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.789073 4886 scope.go:117] "RemoveContainer" containerID="800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.814549 4886 scope.go:117] "RemoveContainer" containerID="0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.836626 4886 scope.go:117] "RemoveContainer" containerID="9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.871749 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-utilities\") pod \"cac65de9-7fce-4fbf-8a67-cd792546e130\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.872089 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fbfn\" (UniqueName: \"kubernetes.io/projected/cac65de9-7fce-4fbf-8a67-cd792546e130-kube-api-access-9fbfn\") pod \"cac65de9-7fce-4fbf-8a67-cd792546e130\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " Nov 24 09:23:06 crc 
kubenswrapper[4886]: I1124 09:23:06.872210 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-catalog-content\") pod \"cac65de9-7fce-4fbf-8a67-cd792546e130\" (UID: \"cac65de9-7fce-4fbf-8a67-cd792546e130\") " Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.872737 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-utilities" (OuterVolumeSpecName: "utilities") pod "cac65de9-7fce-4fbf-8a67-cd792546e130" (UID: "cac65de9-7fce-4fbf-8a67-cd792546e130"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.877630 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac65de9-7fce-4fbf-8a67-cd792546e130-kube-api-access-9fbfn" (OuterVolumeSpecName: "kube-api-access-9fbfn") pod "cac65de9-7fce-4fbf-8a67-cd792546e130" (UID: "cac65de9-7fce-4fbf-8a67-cd792546e130"). InnerVolumeSpecName "kube-api-access-9fbfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.878938 4886 scope.go:117] "RemoveContainer" containerID="800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3" Nov 24 09:23:06 crc kubenswrapper[4886]: E1124 09:23:06.879901 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3\": container with ID starting with 800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3 not found: ID does not exist" containerID="800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.879940 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3"} err="failed to get container status \"800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3\": rpc error: code = NotFound desc = could not find container \"800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3\": container with ID starting with 800babbbaf1ad9058f9fff5ae85ceaf15c07cc4ef7ca820f73b05b9a401488e3 not found: ID does not exist" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.879961 4886 scope.go:117] "RemoveContainer" containerID="0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529" Nov 24 09:23:06 crc kubenswrapper[4886]: E1124 09:23:06.880265 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529\": container with ID starting with 0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529 not found: ID does not exist" containerID="0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.880302 
4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529"} err="failed to get container status \"0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529\": rpc error: code = NotFound desc = could not find container \"0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529\": container with ID starting with 0776da5744ae23a2b7662d61210c819ba64e40d1da136b62f55dbd0f53f7d529 not found: ID does not exist" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.880323 4886 scope.go:117] "RemoveContainer" containerID="9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91" Nov 24 09:23:06 crc kubenswrapper[4886]: E1124 09:23:06.881642 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91\": container with ID starting with 9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91 not found: ID does not exist" containerID="9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.881672 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91"} err="failed to get container status \"9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91\": rpc error: code = NotFound desc = could not find container \"9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91\": container with ID starting with 9194879d614c4d3a593f47553b691e4e816e74d9c39203ac22c5d8f29ef67d91 not found: ID does not exist" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.975259 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-utilities\") on node 
\"crc\" DevicePath \"\"" Nov 24 09:23:06 crc kubenswrapper[4886]: I1124 09:23:06.975294 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fbfn\" (UniqueName: \"kubernetes.io/projected/cac65de9-7fce-4fbf-8a67-cd792546e130-kube-api-access-9fbfn\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:07 crc kubenswrapper[4886]: I1124 09:23:07.293091 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cac65de9-7fce-4fbf-8a67-cd792546e130" (UID: "cac65de9-7fce-4fbf-8a67-cd792546e130"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:23:07 crc kubenswrapper[4886]: I1124 09:23:07.383050 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac65de9-7fce-4fbf-8a67-cd792546e130-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:07 crc kubenswrapper[4886]: I1124 09:23:07.422244 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mjq7h"] Nov 24 09:23:07 crc kubenswrapper[4886]: I1124 09:23:07.429957 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mjq7h"] Nov 24 09:23:08 crc kubenswrapper[4886]: I1124 09:23:08.864488 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" path="/var/lib/kubelet/pods/cac65de9-7fce-4fbf-8a67-cd792546e130/volumes" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.388386 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwrh"] Nov 24 09:23:13 crc kubenswrapper[4886]: E1124 09:23:13.390842 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerName="extract-content" Nov 24 
09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.390874 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerName="extract-content" Nov 24 09:23:13 crc kubenswrapper[4886]: E1124 09:23:13.390891 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerName="registry-server" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.390903 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerName="registry-server" Nov 24 09:23:13 crc kubenswrapper[4886]: E1124 09:23:13.391101 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerName="extract-utilities" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.391112 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerName="extract-utilities" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.391323 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac65de9-7fce-4fbf-8a67-cd792546e130" containerName="registry-server" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.395607 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.403680 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwrh"] Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.511659 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-utilities\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.511736 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxgb\" (UniqueName: \"kubernetes.io/projected/181ee121-09c6-4cdc-b225-3b7e28d322a9-kube-api-access-7kxgb\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.511881 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-catalog-content\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.614005 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-catalog-content\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.614314 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-utilities\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.614360 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxgb\" (UniqueName: \"kubernetes.io/projected/181ee121-09c6-4cdc-b225-3b7e28d322a9-kube-api-access-7kxgb\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.614680 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-catalog-content\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.614753 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-utilities\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.639494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxgb\" (UniqueName: \"kubernetes.io/projected/181ee121-09c6-4cdc-b225-3b7e28d322a9-kube-api-access-7kxgb\") pod \"redhat-marketplace-9cwrh\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:13 crc kubenswrapper[4886]: I1124 09:23:13.739443 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:14 crc kubenswrapper[4886]: I1124 09:23:14.214686 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwrh"] Nov 24 09:23:14 crc kubenswrapper[4886]: W1124 09:23:14.230307 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod181ee121_09c6_4cdc_b225_3b7e28d322a9.slice/crio-ad4e780ceeb6c7b600ac2b1d0c9c796059b360d7fae7d85e945b5c9b39d893b8 WatchSource:0}: Error finding container ad4e780ceeb6c7b600ac2b1d0c9c796059b360d7fae7d85e945b5c9b39d893b8: Status 404 returned error can't find the container with id ad4e780ceeb6c7b600ac2b1d0c9c796059b360d7fae7d85e945b5c9b39d893b8 Nov 24 09:23:14 crc kubenswrapper[4886]: I1124 09:23:14.887450 4886 generic.go:334] "Generic (PLEG): container finished" podID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerID="11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571" exitCode=0 Nov 24 09:23:14 crc kubenswrapper[4886]: I1124 09:23:14.887496 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwrh" event={"ID":"181ee121-09c6-4cdc-b225-3b7e28d322a9","Type":"ContainerDied","Data":"11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571"} Nov 24 09:23:14 crc kubenswrapper[4886]: I1124 09:23:14.887816 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwrh" event={"ID":"181ee121-09c6-4cdc-b225-3b7e28d322a9","Type":"ContainerStarted","Data":"ad4e780ceeb6c7b600ac2b1d0c9c796059b360d7fae7d85e945b5c9b39d893b8"} Nov 24 09:23:15 crc kubenswrapper[4886]: I1124 09:23:15.900203 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwrh" 
event={"ID":"181ee121-09c6-4cdc-b225-3b7e28d322a9","Type":"ContainerStarted","Data":"2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65"} Nov 24 09:23:16 crc kubenswrapper[4886]: I1124 09:23:16.910469 4886 generic.go:334] "Generic (PLEG): container finished" podID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerID="2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65" exitCode=0 Nov 24 09:23:16 crc kubenswrapper[4886]: I1124 09:23:16.910878 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwrh" event={"ID":"181ee121-09c6-4cdc-b225-3b7e28d322a9","Type":"ContainerDied","Data":"2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65"} Nov 24 09:23:17 crc kubenswrapper[4886]: I1124 09:23:17.924306 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwrh" event={"ID":"181ee121-09c6-4cdc-b225-3b7e28d322a9","Type":"ContainerStarted","Data":"f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c"} Nov 24 09:23:17 crc kubenswrapper[4886]: I1124 09:23:17.944753 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9cwrh" podStartSLOduration=2.56305298 podStartE2EDuration="4.944731111s" podCreationTimestamp="2025-11-24 09:23:13 +0000 UTC" firstStartedPulling="2025-11-24 09:23:14.890041983 +0000 UTC m=+2050.776780118" lastFinishedPulling="2025-11-24 09:23:17.271720114 +0000 UTC m=+2053.158458249" observedRunningTime="2025-11-24 09:23:17.942394254 +0000 UTC m=+2053.829132389" watchObservedRunningTime="2025-11-24 09:23:17.944731111 +0000 UTC m=+2053.831469246" Nov 24 09:23:23 crc kubenswrapper[4886]: I1124 09:23:23.740667 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:23 crc kubenswrapper[4886]: I1124 09:23:23.741296 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:23 crc kubenswrapper[4886]: I1124 09:23:23.797794 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:24 crc kubenswrapper[4886]: I1124 09:23:24.045306 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:24 crc kubenswrapper[4886]: I1124 09:23:24.379053 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwrh"] Nov 24 09:23:25 crc kubenswrapper[4886]: I1124 09:23:25.995270 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9cwrh" podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerName="registry-server" containerID="cri-o://f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c" gracePeriod=2 Nov 24 09:23:26 crc kubenswrapper[4886]: I1124 09:23:26.980351 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.017927 4886 generic.go:334] "Generic (PLEG): container finished" podID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerID="f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c" exitCode=0 Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.017981 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwrh" event={"ID":"181ee121-09c6-4cdc-b225-3b7e28d322a9","Type":"ContainerDied","Data":"f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c"} Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.018011 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwrh" event={"ID":"181ee121-09c6-4cdc-b225-3b7e28d322a9","Type":"ContainerDied","Data":"ad4e780ceeb6c7b600ac2b1d0c9c796059b360d7fae7d85e945b5c9b39d893b8"} Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.018030 4886 scope.go:117] "RemoveContainer" containerID="f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.018263 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cwrh" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.046452 4886 scope.go:117] "RemoveContainer" containerID="2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.072962 4886 scope.go:117] "RemoveContainer" containerID="11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.110336 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-utilities\") pod \"181ee121-09c6-4cdc-b225-3b7e28d322a9\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.110428 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-catalog-content\") pod \"181ee121-09c6-4cdc-b225-3b7e28d322a9\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.110646 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxgb\" (UniqueName: \"kubernetes.io/projected/181ee121-09c6-4cdc-b225-3b7e28d322a9-kube-api-access-7kxgb\") pod \"181ee121-09c6-4cdc-b225-3b7e28d322a9\" (UID: \"181ee121-09c6-4cdc-b225-3b7e28d322a9\") " Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.113038 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-utilities" (OuterVolumeSpecName: "utilities") pod "181ee121-09c6-4cdc-b225-3b7e28d322a9" (UID: "181ee121-09c6-4cdc-b225-3b7e28d322a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.120310 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181ee121-09c6-4cdc-b225-3b7e28d322a9-kube-api-access-7kxgb" (OuterVolumeSpecName: "kube-api-access-7kxgb") pod "181ee121-09c6-4cdc-b225-3b7e28d322a9" (UID: "181ee121-09c6-4cdc-b225-3b7e28d322a9"). InnerVolumeSpecName "kube-api-access-7kxgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.130147 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "181ee121-09c6-4cdc-b225-3b7e28d322a9" (UID: "181ee121-09c6-4cdc-b225-3b7e28d322a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.136089 4886 scope.go:117] "RemoveContainer" containerID="f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c" Nov 24 09:23:27 crc kubenswrapper[4886]: E1124 09:23:27.137965 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c\": container with ID starting with f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c not found: ID does not exist" containerID="f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.138021 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c"} err="failed to get container status \"f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c\": rpc error: code = NotFound desc = could not find 
container \"f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c\": container with ID starting with f0c50d536ee0a76de4943df5f742bf1ed605d48801991f3906f0ec590460b32c not found: ID does not exist" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.138057 4886 scope.go:117] "RemoveContainer" containerID="2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65" Nov 24 09:23:27 crc kubenswrapper[4886]: E1124 09:23:27.138712 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65\": container with ID starting with 2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65 not found: ID does not exist" containerID="2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.138738 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65"} err="failed to get container status \"2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65\": rpc error: code = NotFound desc = could not find container \"2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65\": container with ID starting with 2ff4324188041bdaa0ff0aa4bcf0b34b9f208e57d89cf1a2e7f149542d577e65 not found: ID does not exist" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.138754 4886 scope.go:117] "RemoveContainer" containerID="11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571" Nov 24 09:23:27 crc kubenswrapper[4886]: E1124 09:23:27.139267 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571\": container with ID starting with 11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571 not found: ID does 
not exist" containerID="11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.139314 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571"} err="failed to get container status \"11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571\": rpc error: code = NotFound desc = could not find container \"11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571\": container with ID starting with 11bef09e9a29fdb09f0470e4cf13320d32f675c03e582972f006a91f764c2571 not found: ID does not exist" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.213333 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.213367 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181ee121-09c6-4cdc-b225-3b7e28d322a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.213380 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kxgb\" (UniqueName: \"kubernetes.io/projected/181ee121-09c6-4cdc-b225-3b7e28d322a9-kube-api-access-7kxgb\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.360816 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwrh"] Nov 24 09:23:27 crc kubenswrapper[4886]: I1124 09:23:27.371299 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwrh"] Nov 24 09:23:28 crc kubenswrapper[4886]: I1124 09:23:28.860758 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" path="/var/lib/kubelet/pods/181ee121-09c6-4cdc-b225-3b7e28d322a9/volumes" Nov 24 09:23:32 crc kubenswrapper[4886]: I1124 09:23:32.065564 4886 generic.go:334] "Generic (PLEG): container finished" podID="cde6df39-d639-4855-a34f-29ff9af5c870" containerID="3d281cbaddf3af25e132613ab6a3355fb7755d4784de16c541c0b3655ef94ddb" exitCode=0 Nov 24 09:23:32 crc kubenswrapper[4886]: I1124 09:23:32.065633 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" event={"ID":"cde6df39-d639-4855-a34f-29ff9af5c870","Type":"ContainerDied","Data":"3d281cbaddf3af25e132613ab6a3355fb7755d4784de16c541c0b3655ef94ddb"} Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.478694 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.645274 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-inventory\") pod \"cde6df39-d639-4855-a34f-29ff9af5c870\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.646287 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cde6df39-d639-4855-a34f-29ff9af5c870-ovncontroller-config-0\") pod \"cde6df39-d639-4855-a34f-29ff9af5c870\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.646426 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xzkn\" (UniqueName: \"kubernetes.io/projected/cde6df39-d639-4855-a34f-29ff9af5c870-kube-api-access-5xzkn\") pod \"cde6df39-d639-4855-a34f-29ff9af5c870\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " Nov 24 
09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.646609 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ovn-combined-ca-bundle\") pod \"cde6df39-d639-4855-a34f-29ff9af5c870\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.647201 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ssh-key\") pod \"cde6df39-d639-4855-a34f-29ff9af5c870\" (UID: \"cde6df39-d639-4855-a34f-29ff9af5c870\") " Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.651687 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde6df39-d639-4855-a34f-29ff9af5c870-kube-api-access-5xzkn" (OuterVolumeSpecName: "kube-api-access-5xzkn") pod "cde6df39-d639-4855-a34f-29ff9af5c870" (UID: "cde6df39-d639-4855-a34f-29ff9af5c870"). InnerVolumeSpecName "kube-api-access-5xzkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.651912 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cde6df39-d639-4855-a34f-29ff9af5c870" (UID: "cde6df39-d639-4855-a34f-29ff9af5c870"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.674357 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cde6df39-d639-4855-a34f-29ff9af5c870" (UID: "cde6df39-d639-4855-a34f-29ff9af5c870"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.676495 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde6df39-d639-4855-a34f-29ff9af5c870-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "cde6df39-d639-4855-a34f-29ff9af5c870" (UID: "cde6df39-d639-4855-a34f-29ff9af5c870"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.678776 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-inventory" (OuterVolumeSpecName: "inventory") pod "cde6df39-d639-4855-a34f-29ff9af5c870" (UID: "cde6df39-d639-4855-a34f-29ff9af5c870"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.749863 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.749896 4886 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cde6df39-d639-4855-a34f-29ff9af5c870-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.749906 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xzkn\" (UniqueName: \"kubernetes.io/projected/cde6df39-d639-4855-a34f-29ff9af5c870-kube-api-access-5xzkn\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.749916 4886 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:33 crc kubenswrapper[4886]: I1124 09:23:33.749928 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cde6df39-d639-4855-a34f-29ff9af5c870-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.084767 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" event={"ID":"cde6df39-d639-4855-a34f-29ff9af5c870","Type":"ContainerDied","Data":"a1d655b4e2d38c2381a09023fa5bd8447a542c0f4adb3fe7a1b3ad02fdc1639f"} Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.084808 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d655b4e2d38c2381a09023fa5bd8447a542c0f4adb3fe7a1b3ad02fdc1639f" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.084871 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqb42" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.189573 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s"] Nov 24 09:23:34 crc kubenswrapper[4886]: E1124 09:23:34.189993 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde6df39-d639-4855-a34f-29ff9af5c870" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.190009 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde6df39-d639-4855-a34f-29ff9af5c870" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:34 crc kubenswrapper[4886]: E1124 09:23:34.190028 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerName="extract-content" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.190035 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerName="extract-content" Nov 24 09:23:34 crc kubenswrapper[4886]: E1124 09:23:34.190053 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerName="registry-server" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.190060 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerName="registry-server" Nov 24 09:23:34 crc kubenswrapper[4886]: E1124 09:23:34.190071 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerName="extract-utilities" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.190077 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerName="extract-utilities" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.190672 4886 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cde6df39-d639-4855-a34f-29ff9af5c870" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.190699 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="181ee121-09c6-4cdc-b225-3b7e28d322a9" containerName="registry-server" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.191571 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.196090 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.196144 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.196263 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.196164 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.196452 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.196564 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.202852 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s"] Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.261262 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-528nj\" (UniqueName: \"kubernetes.io/projected/ad4158ea-36b4-499a-bfb0-d6743c87340a-kube-api-access-528nj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.261392 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.261462 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.261566 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.261597 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.261617 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.363105 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.363202 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.363227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.363248 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.363322 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-528nj\" (UniqueName: \"kubernetes.io/projected/ad4158ea-36b4-499a-bfb0-d6743c87340a-kube-api-access-528nj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.363413 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.366862 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: 
\"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.367731 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.367830 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.373547 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.374819 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 
09:23:34.382663 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-528nj\" (UniqueName: \"kubernetes.io/projected/ad4158ea-36b4-499a-bfb0-d6743c87340a-kube-api-access-528nj\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:34 crc kubenswrapper[4886]: I1124 09:23:34.510219 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:23:35 crc kubenswrapper[4886]: I1124 09:23:35.038416 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s"] Nov 24 09:23:35 crc kubenswrapper[4886]: I1124 09:23:35.096955 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" event={"ID":"ad4158ea-36b4-499a-bfb0-d6743c87340a","Type":"ContainerStarted","Data":"bc574e6a42caeaebc6c76179b2e4eb555ef41335f726d83cf8edc1daa3235e3e"} Nov 24 09:23:36 crc kubenswrapper[4886]: I1124 09:23:36.121546 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" event={"ID":"ad4158ea-36b4-499a-bfb0-d6743c87340a","Type":"ContainerStarted","Data":"b4edc489571ab5e2e3f1e7a442c65cf5c3e7f9c3ddc627f69a0391654f13b6b3"} Nov 24 09:23:36 crc kubenswrapper[4886]: I1124 09:23:36.149942 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" podStartSLOduration=1.728708006 podStartE2EDuration="2.149925065s" podCreationTimestamp="2025-11-24 09:23:34 +0000 UTC" firstStartedPulling="2025-11-24 09:23:35.044928641 +0000 UTC m=+2070.931666776" lastFinishedPulling="2025-11-24 09:23:35.46614571 +0000 UTC m=+2071.352883835" 
observedRunningTime="2025-11-24 09:23:36.14378488 +0000 UTC m=+2072.030523035" watchObservedRunningTime="2025-11-24 09:23:36.149925065 +0000 UTC m=+2072.036663200" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.806831 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvntb"] Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.809311 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.818591 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvntb"] Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.876085 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-utilities\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.876293 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2pj\" (UniqueName: \"kubernetes.io/projected/46391d98-0cca-4d1d-888b-01c066a4babd-kube-api-access-vz2pj\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.876360 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-catalog-content\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 
09:24:04.980424 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-utilities\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.980598 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz2pj\" (UniqueName: \"kubernetes.io/projected/46391d98-0cca-4d1d-888b-01c066a4babd-kube-api-access-vz2pj\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.980651 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-catalog-content\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.981569 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-utilities\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:04 crc kubenswrapper[4886]: I1124 09:24:04.982007 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-catalog-content\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:05 crc kubenswrapper[4886]: I1124 09:24:05.018512 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz2pj\" (UniqueName: \"kubernetes.io/projected/46391d98-0cca-4d1d-888b-01c066a4babd-kube-api-access-vz2pj\") pod \"certified-operators-rvntb\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:05 crc kubenswrapper[4886]: I1124 09:24:05.130898 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:05 crc kubenswrapper[4886]: I1124 09:24:05.685717 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvntb"] Nov 24 09:24:06 crc kubenswrapper[4886]: I1124 09:24:06.430193 4886 generic.go:334] "Generic (PLEG): container finished" podID="46391d98-0cca-4d1d-888b-01c066a4babd" containerID="60f69a6728e1f49cc35704a6136b970c4774636ce388002b6fcb1b0b9d874faf" exitCode=0 Nov 24 09:24:06 crc kubenswrapper[4886]: I1124 09:24:06.430342 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvntb" event={"ID":"46391d98-0cca-4d1d-888b-01c066a4babd","Type":"ContainerDied","Data":"60f69a6728e1f49cc35704a6136b970c4774636ce388002b6fcb1b0b9d874faf"} Nov 24 09:24:06 crc kubenswrapper[4886]: I1124 09:24:06.430698 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvntb" event={"ID":"46391d98-0cca-4d1d-888b-01c066a4babd","Type":"ContainerStarted","Data":"83b321cc9c337e41d2b4d254f2f7c6106f29b13e63ff5165654f34b8ccd4dae1"} Nov 24 09:24:07 crc kubenswrapper[4886]: I1124 09:24:07.442711 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvntb" event={"ID":"46391d98-0cca-4d1d-888b-01c066a4babd","Type":"ContainerStarted","Data":"dd328d2e3e06f46617b6ee0b62fcb9af985e3d1d2d11f3c8f870b350e5a58557"} Nov 24 09:24:08 crc kubenswrapper[4886]: I1124 09:24:08.454583 4886 
generic.go:334] "Generic (PLEG): container finished" podID="46391d98-0cca-4d1d-888b-01c066a4babd" containerID="dd328d2e3e06f46617b6ee0b62fcb9af985e3d1d2d11f3c8f870b350e5a58557" exitCode=0 Nov 24 09:24:08 crc kubenswrapper[4886]: I1124 09:24:08.454831 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvntb" event={"ID":"46391d98-0cca-4d1d-888b-01c066a4babd","Type":"ContainerDied","Data":"dd328d2e3e06f46617b6ee0b62fcb9af985e3d1d2d11f3c8f870b350e5a58557"} Nov 24 09:24:09 crc kubenswrapper[4886]: I1124 09:24:09.467145 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvntb" event={"ID":"46391d98-0cca-4d1d-888b-01c066a4babd","Type":"ContainerStarted","Data":"6f2296728686d2b4d4d0f1ea89b81c28fa919cc5972333c6225ebfccad8fb796"} Nov 24 09:24:10 crc kubenswrapper[4886]: I1124 09:24:10.494839 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvntb" podStartSLOduration=3.737988693 podStartE2EDuration="6.494819115s" podCreationTimestamp="2025-11-24 09:24:04 +0000 UTC" firstStartedPulling="2025-11-24 09:24:06.432785199 +0000 UTC m=+2102.319523334" lastFinishedPulling="2025-11-24 09:24:09.189615621 +0000 UTC m=+2105.076353756" observedRunningTime="2025-11-24 09:24:10.49325604 +0000 UTC m=+2106.379994185" watchObservedRunningTime="2025-11-24 09:24:10.494819115 +0000 UTC m=+2106.381557240" Nov 24 09:24:15 crc kubenswrapper[4886]: I1124 09:24:15.132136 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:15 crc kubenswrapper[4886]: I1124 09:24:15.132779 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:15 crc kubenswrapper[4886]: I1124 09:24:15.188257 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:15 crc kubenswrapper[4886]: I1124 09:24:15.574675 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:15 crc kubenswrapper[4886]: I1124 09:24:15.620859 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvntb"] Nov 24 09:24:17 crc kubenswrapper[4886]: I1124 09:24:17.546987 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rvntb" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" containerName="registry-server" containerID="cri-o://6f2296728686d2b4d4d0f1ea89b81c28fa919cc5972333c6225ebfccad8fb796" gracePeriod=2 Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.562423 4886 generic.go:334] "Generic (PLEG): container finished" podID="46391d98-0cca-4d1d-888b-01c066a4babd" containerID="6f2296728686d2b4d4d0f1ea89b81c28fa919cc5972333c6225ebfccad8fb796" exitCode=0 Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.562484 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvntb" event={"ID":"46391d98-0cca-4d1d-888b-01c066a4babd","Type":"ContainerDied","Data":"6f2296728686d2b4d4d0f1ea89b81c28fa919cc5972333c6225ebfccad8fb796"} Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.687543 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.793326 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-catalog-content\") pod \"46391d98-0cca-4d1d-888b-01c066a4babd\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.793409 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-utilities\") pod \"46391d98-0cca-4d1d-888b-01c066a4babd\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.793450 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz2pj\" (UniqueName: \"kubernetes.io/projected/46391d98-0cca-4d1d-888b-01c066a4babd-kube-api-access-vz2pj\") pod \"46391d98-0cca-4d1d-888b-01c066a4babd\" (UID: \"46391d98-0cca-4d1d-888b-01c066a4babd\") " Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.794550 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-utilities" (OuterVolumeSpecName: "utilities") pod "46391d98-0cca-4d1d-888b-01c066a4babd" (UID: "46391d98-0cca-4d1d-888b-01c066a4babd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.798986 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46391d98-0cca-4d1d-888b-01c066a4babd-kube-api-access-vz2pj" (OuterVolumeSpecName: "kube-api-access-vz2pj") pod "46391d98-0cca-4d1d-888b-01c066a4babd" (UID: "46391d98-0cca-4d1d-888b-01c066a4babd"). InnerVolumeSpecName "kube-api-access-vz2pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.896069 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:18 crc kubenswrapper[4886]: I1124 09:24:18.896115 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz2pj\" (UniqueName: \"kubernetes.io/projected/46391d98-0cca-4d1d-888b-01c066a4babd-kube-api-access-vz2pj\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.257986 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46391d98-0cca-4d1d-888b-01c066a4babd" (UID: "46391d98-0cca-4d1d-888b-01c066a4babd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.303973 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46391d98-0cca-4d1d-888b-01c066a4babd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.574808 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvntb" event={"ID":"46391d98-0cca-4d1d-888b-01c066a4babd","Type":"ContainerDied","Data":"83b321cc9c337e41d2b4d254f2f7c6106f29b13e63ff5165654f34b8ccd4dae1"} Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.574865 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvntb" Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.574870 4886 scope.go:117] "RemoveContainer" containerID="6f2296728686d2b4d4d0f1ea89b81c28fa919cc5972333c6225ebfccad8fb796" Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.607660 4886 scope.go:117] "RemoveContainer" containerID="dd328d2e3e06f46617b6ee0b62fcb9af985e3d1d2d11f3c8f870b350e5a58557" Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.616837 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvntb"] Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.634013 4886 scope.go:117] "RemoveContainer" containerID="60f69a6728e1f49cc35704a6136b970c4774636ce388002b6fcb1b0b9d874faf" Nov 24 09:24:19 crc kubenswrapper[4886]: I1124 09:24:19.652286 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rvntb"] Nov 24 09:24:20 crc kubenswrapper[4886]: I1124 09:24:20.866620 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" path="/var/lib/kubelet/pods/46391d98-0cca-4d1d-888b-01c066a4babd/volumes" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.341751 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lvvtb"] Nov 24 09:24:24 crc kubenswrapper[4886]: E1124 09:24:24.347056 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" containerName="registry-server" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.347118 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" containerName="registry-server" Nov 24 09:24:24 crc kubenswrapper[4886]: E1124 09:24:24.347287 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" containerName="extract-content" Nov 24 09:24:24 
crc kubenswrapper[4886]: I1124 09:24:24.347309 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" containerName="extract-content" Nov 24 09:24:24 crc kubenswrapper[4886]: E1124 09:24:24.347397 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" containerName="extract-utilities" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.347416 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" containerName="extract-utilities" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.349993 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="46391d98-0cca-4d1d-888b-01c066a4babd" containerName="registry-server" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.356616 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.368646 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvvtb"] Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.442302 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txjz4\" (UniqueName: \"kubernetes.io/projected/7472f270-eb34-4f9e-b332-d7f53ff1e014-kube-api-access-txjz4\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.442450 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7472f270-eb34-4f9e-b332-d7f53ff1e014-utilities\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc 
kubenswrapper[4886]: I1124 09:24:24.442519 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7472f270-eb34-4f9e-b332-d7f53ff1e014-catalog-content\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.545438 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7472f270-eb34-4f9e-b332-d7f53ff1e014-utilities\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.545532 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7472f270-eb34-4f9e-b332-d7f53ff1e014-catalog-content\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.545686 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txjz4\" (UniqueName: \"kubernetes.io/projected/7472f270-eb34-4f9e-b332-d7f53ff1e014-kube-api-access-txjz4\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.546765 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7472f270-eb34-4f9e-b332-d7f53ff1e014-utilities\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.546908 
4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7472f270-eb34-4f9e-b332-d7f53ff1e014-catalog-content\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.570589 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txjz4\" (UniqueName: \"kubernetes.io/projected/7472f270-eb34-4f9e-b332-d7f53ff1e014-kube-api-access-txjz4\") pod \"redhat-operators-lvvtb\" (UID: \"7472f270-eb34-4f9e-b332-d7f53ff1e014\") " pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.621361 4886 generic.go:334] "Generic (PLEG): container finished" podID="ad4158ea-36b4-499a-bfb0-d6743c87340a" containerID="b4edc489571ab5e2e3f1e7a442c65cf5c3e7f9c3ddc627f69a0391654f13b6b3" exitCode=0 Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.621526 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" event={"ID":"ad4158ea-36b4-499a-bfb0-d6743c87340a","Type":"ContainerDied","Data":"b4edc489571ab5e2e3f1e7a442c65cf5c3e7f9c3ddc627f69a0391654f13b6b3"} Nov 24 09:24:24 crc kubenswrapper[4886]: I1124 09:24:24.685916 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:25 crc kubenswrapper[4886]: I1124 09:24:25.171284 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvvtb"] Nov 24 09:24:25 crc kubenswrapper[4886]: W1124 09:24:25.179697 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7472f270_eb34_4f9e_b332_d7f53ff1e014.slice/crio-2d553e52d362d79b3e767c980689a6f5cf32880d49b6eaff99c84462a41b663d WatchSource:0}: Error finding container 2d553e52d362d79b3e767c980689a6f5cf32880d49b6eaff99c84462a41b663d: Status 404 returned error can't find the container with id 2d553e52d362d79b3e767c980689a6f5cf32880d49b6eaff99c84462a41b663d Nov 24 09:24:25 crc kubenswrapper[4886]: I1124 09:24:25.629598 4886 generic.go:334] "Generic (PLEG): container finished" podID="7472f270-eb34-4f9e-b332-d7f53ff1e014" containerID="cbdba894aeb9a2fce4379571c0e1549e080f24013bf702f5da91bdaff437c0ea" exitCode=0 Nov 24 09:24:25 crc kubenswrapper[4886]: I1124 09:24:25.629698 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvvtb" event={"ID":"7472f270-eb34-4f9e-b332-d7f53ff1e014","Type":"ContainerDied","Data":"cbdba894aeb9a2fce4379571c0e1549e080f24013bf702f5da91bdaff437c0ea"} Nov 24 09:24:25 crc kubenswrapper[4886]: I1124 09:24:25.629920 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvvtb" event={"ID":"7472f270-eb34-4f9e-b332-d7f53ff1e014","Type":"ContainerStarted","Data":"2d553e52d362d79b3e767c980689a6f5cf32880d49b6eaff99c84462a41b663d"} Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.072551 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.177858 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-ssh-key\") pod \"ad4158ea-36b4-499a-bfb0-d6743c87340a\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.177941 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"ad4158ea-36b4-499a-bfb0-d6743c87340a\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.178117 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-nova-metadata-neutron-config-0\") pod \"ad4158ea-36b4-499a-bfb0-d6743c87340a\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.178206 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-inventory\") pod \"ad4158ea-36b4-499a-bfb0-d6743c87340a\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.178273 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-metadata-combined-ca-bundle\") pod \"ad4158ea-36b4-499a-bfb0-d6743c87340a\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " Nov 24 09:24:26 crc 
kubenswrapper[4886]: I1124 09:24:26.178816 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-528nj\" (UniqueName: \"kubernetes.io/projected/ad4158ea-36b4-499a-bfb0-d6743c87340a-kube-api-access-528nj\") pod \"ad4158ea-36b4-499a-bfb0-d6743c87340a\" (UID: \"ad4158ea-36b4-499a-bfb0-d6743c87340a\") " Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.183861 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ad4158ea-36b4-499a-bfb0-d6743c87340a" (UID: "ad4158ea-36b4-499a-bfb0-d6743c87340a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.196255 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4158ea-36b4-499a-bfb0-d6743c87340a-kube-api-access-528nj" (OuterVolumeSpecName: "kube-api-access-528nj") pod "ad4158ea-36b4-499a-bfb0-d6743c87340a" (UID: "ad4158ea-36b4-499a-bfb0-d6743c87340a"). InnerVolumeSpecName "kube-api-access-528nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.207653 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-inventory" (OuterVolumeSpecName: "inventory") pod "ad4158ea-36b4-499a-bfb0-d6743c87340a" (UID: "ad4158ea-36b4-499a-bfb0-d6743c87340a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.221433 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "ad4158ea-36b4-499a-bfb0-d6743c87340a" (UID: "ad4158ea-36b4-499a-bfb0-d6743c87340a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.223301 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "ad4158ea-36b4-499a-bfb0-d6743c87340a" (UID: "ad4158ea-36b4-499a-bfb0-d6743c87340a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.226854 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ad4158ea-36b4-499a-bfb0-d6743c87340a" (UID: "ad4158ea-36b4-499a-bfb0-d6743c87340a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.281512 4886 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.281550 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.281569 4886 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.281585 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-528nj\" (UniqueName: \"kubernetes.io/projected/ad4158ea-36b4-499a-bfb0-d6743c87340a-kube-api-access-528nj\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.281600 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.281611 4886 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ad4158ea-36b4-499a-bfb0-d6743c87340a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.646389 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" 
event={"ID":"ad4158ea-36b4-499a-bfb0-d6743c87340a","Type":"ContainerDied","Data":"bc574e6a42caeaebc6c76179b2e4eb555ef41335f726d83cf8edc1daa3235e3e"} Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.646826 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc574e6a42caeaebc6c76179b2e4eb555ef41335f726d83cf8edc1daa3235e3e" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.646478 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.800311 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md"] Nov 24 09:24:26 crc kubenswrapper[4886]: E1124 09:24:26.800750 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4158ea-36b4-499a-bfb0-d6743c87340a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.800770 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4158ea-36b4-499a-bfb0-d6743c87340a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.800988 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad4158ea-36b4-499a-bfb0-d6743c87340a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.801728 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.803909 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.804057 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.803912 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.804574 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.806262 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.808786 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md"] Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.894225 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.894386 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.894425 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.894687 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzpt4\" (UniqueName: \"kubernetes.io/projected/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-kube-api-access-tzpt4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.894935 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.996719 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.996810 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.996846 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzpt4\" (UniqueName: \"kubernetes.io/projected/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-kube-api-access-tzpt4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.996884 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:26 crc kubenswrapper[4886]: I1124 09:24:26.996941 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:27 crc kubenswrapper[4886]: I1124 09:24:27.001698 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:27 crc kubenswrapper[4886]: I1124 09:24:27.001782 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:27 crc kubenswrapper[4886]: I1124 09:24:27.007735 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:27 crc kubenswrapper[4886]: I1124 09:24:27.015822 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:27 crc kubenswrapper[4886]: I1124 09:24:27.017622 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzpt4\" (UniqueName: \"kubernetes.io/projected/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-kube-api-access-tzpt4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k47md\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:27 crc kubenswrapper[4886]: I1124 09:24:27.124515 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:24:27 crc kubenswrapper[4886]: I1124 09:24:27.701805 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md"] Nov 24 09:24:27 crc kubenswrapper[4886]: W1124 09:24:27.705593 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce68d69b_17a7_483e_be9c_5a39b0e2dee8.slice/crio-92e20418bcabf6f5acdf7f0bc8d86b3c850b400611f98f7b9cbd066e81ecfe2e WatchSource:0}: Error finding container 92e20418bcabf6f5acdf7f0bc8d86b3c850b400611f98f7b9cbd066e81ecfe2e: Status 404 returned error can't find the container with id 92e20418bcabf6f5acdf7f0bc8d86b3c850b400611f98f7b9cbd066e81ecfe2e Nov 24 09:24:28 crc kubenswrapper[4886]: I1124 09:24:28.671128 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" event={"ID":"ce68d69b-17a7-483e-be9c-5a39b0e2dee8","Type":"ContainerStarted","Data":"38f3cf53ff03c1a90d60e8afe7a31c0ca50ed451037d14f77a03f26fafc48c3a"} Nov 24 09:24:28 crc kubenswrapper[4886]: I1124 09:24:28.671594 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" event={"ID":"ce68d69b-17a7-483e-be9c-5a39b0e2dee8","Type":"ContainerStarted","Data":"92e20418bcabf6f5acdf7f0bc8d86b3c850b400611f98f7b9cbd066e81ecfe2e"} Nov 24 09:24:28 crc kubenswrapper[4886]: I1124 09:24:28.694239 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" podStartSLOduration=2.145346472 podStartE2EDuration="2.694220214s" podCreationTimestamp="2025-11-24 09:24:26 +0000 UTC" firstStartedPulling="2025-11-24 09:24:27.708867476 +0000 UTC m=+2123.595605611" lastFinishedPulling="2025-11-24 09:24:28.257741218 +0000 UTC m=+2124.144479353" 
observedRunningTime="2025-11-24 09:24:28.688455609 +0000 UTC m=+2124.575193764" watchObservedRunningTime="2025-11-24 09:24:28.694220214 +0000 UTC m=+2124.580958349" Nov 24 09:24:31 crc kubenswrapper[4886]: I1124 09:24:31.784908 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:24:31 crc kubenswrapper[4886]: I1124 09:24:31.785504 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:24:33 crc kubenswrapper[4886]: I1124 09:24:33.732467 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvvtb" event={"ID":"7472f270-eb34-4f9e-b332-d7f53ff1e014","Type":"ContainerStarted","Data":"95c805bb2412a06deef4f41132f8e4fdc59063f672d0fe03a68ee3107a19d4c8"} Nov 24 09:24:39 crc kubenswrapper[4886]: I1124 09:24:39.786605 4886 generic.go:334] "Generic (PLEG): container finished" podID="7472f270-eb34-4f9e-b332-d7f53ff1e014" containerID="95c805bb2412a06deef4f41132f8e4fdc59063f672d0fe03a68ee3107a19d4c8" exitCode=0 Nov 24 09:24:39 crc kubenswrapper[4886]: I1124 09:24:39.786712 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvvtb" event={"ID":"7472f270-eb34-4f9e-b332-d7f53ff1e014","Type":"ContainerDied","Data":"95c805bb2412a06deef4f41132f8e4fdc59063f672d0fe03a68ee3107a19d4c8"} Nov 24 09:24:42 crc kubenswrapper[4886]: I1124 09:24:42.816785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvvtb" 
event={"ID":"7472f270-eb34-4f9e-b332-d7f53ff1e014","Type":"ContainerStarted","Data":"bb9deda38d0d6b9c7a065b5e6bdfb7798d6d61726fadc1ec773e7992bb503647"} Nov 24 09:24:42 crc kubenswrapper[4886]: I1124 09:24:42.841815 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lvvtb" podStartSLOduration=2.664600239 podStartE2EDuration="18.841795827s" podCreationTimestamp="2025-11-24 09:24:24 +0000 UTC" firstStartedPulling="2025-11-24 09:24:25.631916177 +0000 UTC m=+2121.518654312" lastFinishedPulling="2025-11-24 09:24:41.809111765 +0000 UTC m=+2137.695849900" observedRunningTime="2025-11-24 09:24:42.837888966 +0000 UTC m=+2138.724627111" watchObservedRunningTime="2025-11-24 09:24:42.841795827 +0000 UTC m=+2138.728533962" Nov 24 09:24:44 crc kubenswrapper[4886]: I1124 09:24:44.687071 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:44 crc kubenswrapper[4886]: I1124 09:24:44.687483 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:45 crc kubenswrapper[4886]: I1124 09:24:45.735044 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lvvtb" podUID="7472f270-eb34-4f9e-b332-d7f53ff1e014" containerName="registry-server" probeResult="failure" output=< Nov 24 09:24:45 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:24:45 crc kubenswrapper[4886]: > Nov 24 09:24:54 crc kubenswrapper[4886]: I1124 09:24:54.753764 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:54 crc kubenswrapper[4886]: I1124 09:24:54.813906 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lvvtb" Nov 24 09:24:55 crc kubenswrapper[4886]: I1124 
09:24:55.329488 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvvtb"] Nov 24 09:24:55 crc kubenswrapper[4886]: I1124 09:24:55.513885 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27q9d"] Nov 24 09:24:55 crc kubenswrapper[4886]: I1124 09:24:55.514375 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-27q9d" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerName="registry-server" containerID="cri-o://35ce95ccaf0cdc818aca0e15931b60e260aae73fe6e671bee84829250a7b6a43" gracePeriod=2 Nov 24 09:24:55 crc kubenswrapper[4886]: I1124 09:24:55.935096 4886 generic.go:334] "Generic (PLEG): container finished" podID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerID="35ce95ccaf0cdc818aca0e15931b60e260aae73fe6e671bee84829250a7b6a43" exitCode=0 Nov 24 09:24:55 crc kubenswrapper[4886]: I1124 09:24:55.935489 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27q9d" event={"ID":"f32efa8c-706c-4a05-a3a0-6d3be84722c3","Type":"ContainerDied","Data":"35ce95ccaf0cdc818aca0e15931b60e260aae73fe6e671bee84829250a7b6a43"} Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.062791 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.212302 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-utilities\") pod \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.212418 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-catalog-content\") pod \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.212585 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5nkk\" (UniqueName: \"kubernetes.io/projected/f32efa8c-706c-4a05-a3a0-6d3be84722c3-kube-api-access-m5nkk\") pod \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\" (UID: \"f32efa8c-706c-4a05-a3a0-6d3be84722c3\") " Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.212940 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-utilities" (OuterVolumeSpecName: "utilities") pod "f32efa8c-706c-4a05-a3a0-6d3be84722c3" (UID: "f32efa8c-706c-4a05-a3a0-6d3be84722c3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.213708 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.224857 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32efa8c-706c-4a05-a3a0-6d3be84722c3-kube-api-access-m5nkk" (OuterVolumeSpecName: "kube-api-access-m5nkk") pod "f32efa8c-706c-4a05-a3a0-6d3be84722c3" (UID: "f32efa8c-706c-4a05-a3a0-6d3be84722c3"). InnerVolumeSpecName "kube-api-access-m5nkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.288508 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f32efa8c-706c-4a05-a3a0-6d3be84722c3" (UID: "f32efa8c-706c-4a05-a3a0-6d3be84722c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.316719 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32efa8c-706c-4a05-a3a0-6d3be84722c3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.316782 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5nkk\" (UniqueName: \"kubernetes.io/projected/f32efa8c-706c-4a05-a3a0-6d3be84722c3-kube-api-access-m5nkk\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.947718 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27q9d" event={"ID":"f32efa8c-706c-4a05-a3a0-6d3be84722c3","Type":"ContainerDied","Data":"2c7cdac66915e43dc803614a8ef277f448057cd3745ff04baa05d52eb63b0d95"} Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.947772 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27q9d" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.947777 4886 scope.go:117] "RemoveContainer" containerID="35ce95ccaf0cdc818aca0e15931b60e260aae73fe6e671bee84829250a7b6a43" Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.991861 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27q9d"] Nov 24 09:24:56 crc kubenswrapper[4886]: I1124 09:24:56.992442 4886 scope.go:117] "RemoveContainer" containerID="13108e6edd6e8a04c5fa2189f863da877fabe23761f13a3a80708e98afb20ce5" Nov 24 09:24:57 crc kubenswrapper[4886]: I1124 09:24:57.002356 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-27q9d"] Nov 24 09:24:57 crc kubenswrapper[4886]: I1124 09:24:57.034435 4886 scope.go:117] "RemoveContainer" containerID="b81cff2f00edafe81809ac5f442737d84dc29ae6ad0f8eb3829cf4c6ceaa8dc6" Nov 24 09:24:58 crc kubenswrapper[4886]: I1124 09:24:58.860044 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" path="/var/lib/kubelet/pods/f32efa8c-706c-4a05-a3a0-6d3be84722c3/volumes" Nov 24 09:25:01 crc kubenswrapper[4886]: I1124 09:25:01.783817 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:25:01 crc kubenswrapper[4886]: I1124 09:25:01.784346 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:25:31 crc kubenswrapper[4886]: I1124 
09:25:31.784393 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:25:31 crc kubenswrapper[4886]: I1124 09:25:31.785000 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:25:31 crc kubenswrapper[4886]: I1124 09:25:31.785068 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:25:31 crc kubenswrapper[4886]: I1124 09:25:31.786066 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67754098990f83baab3456c52fb1373130119f025267270bb714b30248f4edbb"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:25:31 crc kubenswrapper[4886]: I1124 09:25:31.786128 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://67754098990f83baab3456c52fb1373130119f025267270bb714b30248f4edbb" gracePeriod=600 Nov 24 09:25:32 crc kubenswrapper[4886]: I1124 09:25:32.272646 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="67754098990f83baab3456c52fb1373130119f025267270bb714b30248f4edbb" exitCode=0 Nov 24 
09:25:32 crc kubenswrapper[4886]: I1124 09:25:32.272727 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"67754098990f83baab3456c52fb1373130119f025267270bb714b30248f4edbb"} Nov 24 09:25:32 crc kubenswrapper[4886]: I1124 09:25:32.273425 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71"} Nov 24 09:25:32 crc kubenswrapper[4886]: I1124 09:25:32.273557 4886 scope.go:117] "RemoveContainer" containerID="e9e5da22d6f503b164f934a14d5dfa5723b622287d3192f538cd5fb4bb3d237c" Nov 24 09:28:01 crc kubenswrapper[4886]: I1124 09:28:01.784493 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:28:01 crc kubenswrapper[4886]: I1124 09:28:01.785213 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:28:31 crc kubenswrapper[4886]: I1124 09:28:31.784656 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:28:31 crc kubenswrapper[4886]: I1124 
09:28:31.785230 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:28:55 crc kubenswrapper[4886]: I1124 09:28:55.143925 4886 generic.go:334] "Generic (PLEG): container finished" podID="ce68d69b-17a7-483e-be9c-5a39b0e2dee8" containerID="38f3cf53ff03c1a90d60e8afe7a31c0ca50ed451037d14f77a03f26fafc48c3a" exitCode=0 Nov 24 09:28:55 crc kubenswrapper[4886]: I1124 09:28:55.144357 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" event={"ID":"ce68d69b-17a7-483e-be9c-5a39b0e2dee8","Type":"ContainerDied","Data":"38f3cf53ff03c1a90d60e8afe7a31c0ca50ed451037d14f77a03f26fafc48c3a"} Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.551535 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.654973 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-ssh-key\") pod \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.655481 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzpt4\" (UniqueName: \"kubernetes.io/projected/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-kube-api-access-tzpt4\") pod \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.655532 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-combined-ca-bundle\") pod \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.655626 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-secret-0\") pod \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.655798 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-inventory\") pod \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\" (UID: \"ce68d69b-17a7-483e-be9c-5a39b0e2dee8\") " Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.660725 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ce68d69b-17a7-483e-be9c-5a39b0e2dee8" (UID: "ce68d69b-17a7-483e-be9c-5a39b0e2dee8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.661392 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-kube-api-access-tzpt4" (OuterVolumeSpecName: "kube-api-access-tzpt4") pod "ce68d69b-17a7-483e-be9c-5a39b0e2dee8" (UID: "ce68d69b-17a7-483e-be9c-5a39b0e2dee8"). InnerVolumeSpecName "kube-api-access-tzpt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.681433 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ce68d69b-17a7-483e-be9c-5a39b0e2dee8" (UID: "ce68d69b-17a7-483e-be9c-5a39b0e2dee8"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.683073 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ce68d69b-17a7-483e-be9c-5a39b0e2dee8" (UID: "ce68d69b-17a7-483e-be9c-5a39b0e2dee8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.687393 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-inventory" (OuterVolumeSpecName: "inventory") pod "ce68d69b-17a7-483e-be9c-5a39b0e2dee8" (UID: "ce68d69b-17a7-483e-be9c-5a39b0e2dee8"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.757409 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzpt4\" (UniqueName: \"kubernetes.io/projected/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-kube-api-access-tzpt4\") on node \"crc\" DevicePath \"\"" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.757442 4886 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.757453 4886 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.757465 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:28:56 crc kubenswrapper[4886]: I1124 09:28:56.757473 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ce68d69b-17a7-483e-be9c-5a39b0e2dee8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.171805 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" event={"ID":"ce68d69b-17a7-483e-be9c-5a39b0e2dee8","Type":"ContainerDied","Data":"92e20418bcabf6f5acdf7f0bc8d86b3c850b400611f98f7b9cbd066e81ecfe2e"} Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.171847 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k47md" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.171846 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92e20418bcabf6f5acdf7f0bc8d86b3c850b400611f98f7b9cbd066e81ecfe2e" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.250363 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z"] Nov 24 09:28:57 crc kubenswrapper[4886]: E1124 09:28:57.250871 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerName="registry-server" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.250894 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerName="registry-server" Nov 24 09:28:57 crc kubenswrapper[4886]: E1124 09:28:57.250910 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerName="extract-content" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.250919 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerName="extract-content" Nov 24 09:28:57 crc kubenswrapper[4886]: E1124 09:28:57.250933 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerName="extract-utilities" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.250941 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerName="extract-utilities" Nov 24 09:28:57 crc kubenswrapper[4886]: E1124 09:28:57.250972 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce68d69b-17a7-483e-be9c-5a39b0e2dee8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.250982 4886 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ce68d69b-17a7-483e-be9c-5a39b0e2dee8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.251272 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce68d69b-17a7-483e-be9c-5a39b0e2dee8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.251297 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32efa8c-706c-4a05-a3a0-6d3be84722c3" containerName="registry-server" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.252093 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.254810 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.255969 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.256098 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.256433 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.256448 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.257299 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.257997 4886 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-cell1-compute-config" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.264919 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z"] Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.367427 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.367487 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/36804e58-9c67-454c-a7b2-6aca006eb481-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.367538 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.367641 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.368260 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.368321 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q876g\" (UniqueName: \"kubernetes.io/projected/36804e58-9c67-454c-a7b2-6aca006eb481-kube-api-access-q876g\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.368349 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.368399 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.368432 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472526 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472634 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472662 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q876g\" (UniqueName: \"kubernetes.io/projected/36804e58-9c67-454c-a7b2-6aca006eb481-kube-api-access-q876g\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472684 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" 
(UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472719 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472752 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472779 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472818 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/36804e58-9c67-454c-a7b2-6aca006eb481-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.472836 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.473895 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/36804e58-9c67-454c-a7b2-6aca006eb481-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.477691 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.477969 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.478358 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 
09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.479868 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.480297 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.483379 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.483411 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.490315 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q876g\" (UniqueName: \"kubernetes.io/projected/36804e58-9c67-454c-a7b2-6aca006eb481-kube-api-access-q876g\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-f2b4z\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:57 crc kubenswrapper[4886]: I1124 09:28:57.567818 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:28:58 crc kubenswrapper[4886]: I1124 09:28:58.085614 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z"] Nov 24 09:28:58 crc kubenswrapper[4886]: I1124 09:28:58.097317 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:28:58 crc kubenswrapper[4886]: I1124 09:28:58.182048 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" event={"ID":"36804e58-9c67-454c-a7b2-6aca006eb481","Type":"ContainerStarted","Data":"addb34c00d74f332fb895773043bf164f76cb4f6a4db15e4f1f98fd5c98700a3"} Nov 24 09:28:59 crc kubenswrapper[4886]: I1124 09:28:59.193397 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" event={"ID":"36804e58-9c67-454c-a7b2-6aca006eb481","Type":"ContainerStarted","Data":"d0aa10063c15c0ab83effacad714db78ebeef1653cfe7ffe11eb67d6a6282cae"} Nov 24 09:28:59 crc kubenswrapper[4886]: I1124 09:28:59.213962 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" podStartSLOduration=1.445994976 podStartE2EDuration="2.213945803s" podCreationTimestamp="2025-11-24 09:28:57 +0000 UTC" firstStartedPulling="2025-11-24 09:28:58.097020144 +0000 UTC m=+2393.983758279" lastFinishedPulling="2025-11-24 09:28:58.864970971 +0000 UTC m=+2394.751709106" observedRunningTime="2025-11-24 09:28:59.209753603 +0000 UTC m=+2395.096491748" watchObservedRunningTime="2025-11-24 09:28:59.213945803 +0000 
UTC m=+2395.100683938" Nov 24 09:29:01 crc kubenswrapper[4886]: I1124 09:29:01.784726 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:29:01 crc kubenswrapper[4886]: I1124 09:29:01.785056 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:29:01 crc kubenswrapper[4886]: I1124 09:29:01.785105 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:29:01 crc kubenswrapper[4886]: I1124 09:29:01.785639 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:29:01 crc kubenswrapper[4886]: I1124 09:29:01.785690 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" gracePeriod=600 Nov 24 09:29:01 crc kubenswrapper[4886]: E1124 09:29:01.921827 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:29:02 crc kubenswrapper[4886]: I1124 09:29:02.600703 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" exitCode=0 Nov 24 09:29:02 crc kubenswrapper[4886]: I1124 09:29:02.600761 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71"} Nov 24 09:29:02 crc kubenswrapper[4886]: I1124 09:29:02.600804 4886 scope.go:117] "RemoveContainer" containerID="67754098990f83baab3456c52fb1373130119f025267270bb714b30248f4edbb" Nov 24 09:29:02 crc kubenswrapper[4886]: I1124 09:29:02.601706 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:29:02 crc kubenswrapper[4886]: E1124 09:29:02.602119 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:29:15 crc kubenswrapper[4886]: I1124 09:29:15.848992 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:29:15 crc kubenswrapper[4886]: E1124 09:29:15.849947 4886 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:29:26 crc kubenswrapper[4886]: I1124 09:29:26.852201 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:29:26 crc kubenswrapper[4886]: E1124 09:29:26.853188 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:29:40 crc kubenswrapper[4886]: I1124 09:29:40.849755 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:29:40 crc kubenswrapper[4886]: E1124 09:29:40.850702 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:29:54 crc kubenswrapper[4886]: I1124 09:29:54.856208 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:29:54 crc kubenswrapper[4886]: E1124 
09:29:54.857423 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.168019 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d"] Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.170439 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.172757 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.176136 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.186325 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d"] Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.326444 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74z2w\" (UniqueName: \"kubernetes.io/projected/6be116aa-edcd-4924-a722-5b12e4ae7eb5-kube-api-access-74z2w\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.326746 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6be116aa-edcd-4924-a722-5b12e4ae7eb5-config-volume\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.327103 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6be116aa-edcd-4924-a722-5b12e4ae7eb5-secret-volume\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.428850 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74z2w\" (UniqueName: \"kubernetes.io/projected/6be116aa-edcd-4924-a722-5b12e4ae7eb5-kube-api-access-74z2w\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.428967 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6be116aa-edcd-4924-a722-5b12e4ae7eb5-config-volume\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.429067 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6be116aa-edcd-4924-a722-5b12e4ae7eb5-secret-volume\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.429956 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6be116aa-edcd-4924-a722-5b12e4ae7eb5-config-volume\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.435783 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6be116aa-edcd-4924-a722-5b12e4ae7eb5-secret-volume\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.446133 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74z2w\" (UniqueName: \"kubernetes.io/projected/6be116aa-edcd-4924-a722-5b12e4ae7eb5-kube-api-access-74z2w\") pod \"collect-profiles-29399610-k482d\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.511310 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:00 crc kubenswrapper[4886]: I1124 09:30:00.937989 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d"] Nov 24 09:30:01 crc kubenswrapper[4886]: I1124 09:30:01.149093 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" event={"ID":"6be116aa-edcd-4924-a722-5b12e4ae7eb5","Type":"ContainerStarted","Data":"29bad52f10fcf40226e27b979843d69a9a9c2f64f99cdeea5d1f5665f96fcead"} Nov 24 09:30:01 crc kubenswrapper[4886]: I1124 09:30:01.149243 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" event={"ID":"6be116aa-edcd-4924-a722-5b12e4ae7eb5","Type":"ContainerStarted","Data":"2c55c40b230cb721ac3bbd68e3196244872261fa8a8be9a2d23e5d1cad5ef4c9"} Nov 24 09:30:01 crc kubenswrapper[4886]: I1124 09:30:01.166023 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" podStartSLOduration=1.166004259 podStartE2EDuration="1.166004259s" podCreationTimestamp="2025-11-24 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:30:01.163472047 +0000 UTC m=+2457.050210182" watchObservedRunningTime="2025-11-24 09:30:01.166004259 +0000 UTC m=+2457.052742394" Nov 24 09:30:02 crc kubenswrapper[4886]: I1124 09:30:02.161227 4886 generic.go:334] "Generic (PLEG): container finished" podID="6be116aa-edcd-4924-a722-5b12e4ae7eb5" containerID="29bad52f10fcf40226e27b979843d69a9a9c2f64f99cdeea5d1f5665f96fcead" exitCode=0 Nov 24 09:30:02 crc kubenswrapper[4886]: I1124 09:30:02.161324 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" event={"ID":"6be116aa-edcd-4924-a722-5b12e4ae7eb5","Type":"ContainerDied","Data":"29bad52f10fcf40226e27b979843d69a9a9c2f64f99cdeea5d1f5665f96fcead"} Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.451559 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.580946 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6be116aa-edcd-4924-a722-5b12e4ae7eb5-secret-volume\") pod \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.581431 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74z2w\" (UniqueName: \"kubernetes.io/projected/6be116aa-edcd-4924-a722-5b12e4ae7eb5-kube-api-access-74z2w\") pod \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.581497 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6be116aa-edcd-4924-a722-5b12e4ae7eb5-config-volume\") pod \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\" (UID: \"6be116aa-edcd-4924-a722-5b12e4ae7eb5\") " Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.582748 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be116aa-edcd-4924-a722-5b12e4ae7eb5-config-volume" (OuterVolumeSpecName: "config-volume") pod "6be116aa-edcd-4924-a722-5b12e4ae7eb5" (UID: "6be116aa-edcd-4924-a722-5b12e4ae7eb5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.586350 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6be116aa-edcd-4924-a722-5b12e4ae7eb5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6be116aa-edcd-4924-a722-5b12e4ae7eb5" (UID: "6be116aa-edcd-4924-a722-5b12e4ae7eb5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.586375 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be116aa-edcd-4924-a722-5b12e4ae7eb5-kube-api-access-74z2w" (OuterVolumeSpecName: "kube-api-access-74z2w") pod "6be116aa-edcd-4924-a722-5b12e4ae7eb5" (UID: "6be116aa-edcd-4924-a722-5b12e4ae7eb5"). InnerVolumeSpecName "kube-api-access-74z2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.684607 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6be116aa-edcd-4924-a722-5b12e4ae7eb5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.684653 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74z2w\" (UniqueName: \"kubernetes.io/projected/6be116aa-edcd-4924-a722-5b12e4ae7eb5-kube-api-access-74z2w\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:03 crc kubenswrapper[4886]: I1124 09:30:03.684662 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6be116aa-edcd-4924-a722-5b12e4ae7eb5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:04 crc kubenswrapper[4886]: I1124 09:30:04.180489 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" 
event={"ID":"6be116aa-edcd-4924-a722-5b12e4ae7eb5","Type":"ContainerDied","Data":"2c55c40b230cb721ac3bbd68e3196244872261fa8a8be9a2d23e5d1cad5ef4c9"} Nov 24 09:30:04 crc kubenswrapper[4886]: I1124 09:30:04.180528 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c55c40b230cb721ac3bbd68e3196244872261fa8a8be9a2d23e5d1cad5ef4c9" Nov 24 09:30:04 crc kubenswrapper[4886]: I1124 09:30:04.180799 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-k482d" Nov 24 09:30:04 crc kubenswrapper[4886]: I1124 09:30:04.243907 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r"] Nov 24 09:30:04 crc kubenswrapper[4886]: I1124 09:30:04.253038 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-qwh9r"] Nov 24 09:30:04 crc kubenswrapper[4886]: I1124 09:30:04.860808 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afd949e-d0f2-41b8-9632-917df3468232" path="/var/lib/kubelet/pods/1afd949e-d0f2-41b8-9632-917df3468232/volumes" Nov 24 09:30:08 crc kubenswrapper[4886]: I1124 09:30:08.849892 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:30:08 crc kubenswrapper[4886]: E1124 09:30:08.850703 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:30:20 crc kubenswrapper[4886]: I1124 09:30:20.850026 4886 scope.go:117] "RemoveContainer" 
containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:30:20 crc kubenswrapper[4886]: E1124 09:30:20.851085 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:30:24 crc kubenswrapper[4886]: I1124 09:30:24.655340 4886 scope.go:117] "RemoveContainer" containerID="7b3dca954eaf4c9da537eafa5a081d7a3b680b6d0ded76a6a85c95f22d76af83" Nov 24 09:30:34 crc kubenswrapper[4886]: I1124 09:30:34.856615 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:30:34 crc kubenswrapper[4886]: E1124 09:30:34.857575 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:30:45 crc kubenswrapper[4886]: I1124 09:30:45.849988 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:30:45 crc kubenswrapper[4886]: E1124 09:30:45.850771 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:30:59 crc kubenswrapper[4886]: I1124 09:30:59.849444 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:30:59 crc kubenswrapper[4886]: E1124 09:30:59.850250 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:31:10 crc kubenswrapper[4886]: I1124 09:31:10.850645 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:31:10 crc kubenswrapper[4886]: E1124 09:31:10.851527 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:31:23 crc kubenswrapper[4886]: I1124 09:31:23.850022 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:31:23 crc kubenswrapper[4886]: E1124 09:31:23.850885 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:31:36 crc kubenswrapper[4886]: I1124 09:31:36.849986 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:31:36 crc kubenswrapper[4886]: E1124 09:31:36.850835 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:31:45 crc kubenswrapper[4886]: I1124 09:31:45.131634 4886 generic.go:334] "Generic (PLEG): container finished" podID="36804e58-9c67-454c-a7b2-6aca006eb481" containerID="d0aa10063c15c0ab83effacad714db78ebeef1653cfe7ffe11eb67d6a6282cae" exitCode=0 Nov 24 09:31:45 crc kubenswrapper[4886]: I1124 09:31:45.131733 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" event={"ID":"36804e58-9c67-454c-a7b2-6aca006eb481","Type":"ContainerDied","Data":"d0aa10063c15c0ab83effacad714db78ebeef1653cfe7ffe11eb67d6a6282cae"} Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.547686 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645041 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q876g\" (UniqueName: \"kubernetes.io/projected/36804e58-9c67-454c-a7b2-6aca006eb481-kube-api-access-q876g\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645141 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-ssh-key\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645287 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-0\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645325 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-0\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645366 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-inventory\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645424 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-1\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645445 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/36804e58-9c67-454c-a7b2-6aca006eb481-nova-extra-config-0\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645474 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-1\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.645505 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-combined-ca-bundle\") pod \"36804e58-9c67-454c-a7b2-6aca006eb481\" (UID: \"36804e58-9c67-454c-a7b2-6aca006eb481\") " Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.650926 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.662817 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36804e58-9c67-454c-a7b2-6aca006eb481-kube-api-access-q876g" (OuterVolumeSpecName: "kube-api-access-q876g") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "kube-api-access-q876g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.679551 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36804e58-9c67-454c-a7b2-6aca006eb481-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.679902 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.683335 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.686054 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.688811 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-inventory" (OuterVolumeSpecName: "inventory") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.691520 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.703621 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "36804e58-9c67-454c-a7b2-6aca006eb481" (UID: "36804e58-9c67-454c-a7b2-6aca006eb481"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748221 4886 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748255 4886 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748268 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748280 4886 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748293 4886 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/36804e58-9c67-454c-a7b2-6aca006eb481-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748304 4886 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748315 4886 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-nova-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748325 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q876g\" (UniqueName: \"kubernetes.io/projected/36804e58-9c67-454c-a7b2-6aca006eb481-kube-api-access-q876g\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:46 crc kubenswrapper[4886]: I1124 09:31:46.748335 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/36804e58-9c67-454c-a7b2-6aca006eb481-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.150234 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" event={"ID":"36804e58-9c67-454c-a7b2-6aca006eb481","Type":"ContainerDied","Data":"addb34c00d74f332fb895773043bf164f76cb4f6a4db15e4f1f98fd5c98700a3"} Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.150276 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="addb34c00d74f332fb895773043bf164f76cb4f6a4db15e4f1f98fd5c98700a3" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.150339 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f2b4z" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.296514 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4"] Nov 24 09:31:47 crc kubenswrapper[4886]: E1124 09:31:47.297204 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36804e58-9c67-454c-a7b2-6aca006eb481" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.297291 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="36804e58-9c67-454c-a7b2-6aca006eb481" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 09:31:47 crc kubenswrapper[4886]: E1124 09:31:47.297384 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be116aa-edcd-4924-a722-5b12e4ae7eb5" containerName="collect-profiles" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.297437 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be116aa-edcd-4924-a722-5b12e4ae7eb5" containerName="collect-profiles" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.297677 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="36804e58-9c67-454c-a7b2-6aca006eb481" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.297755 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be116aa-edcd-4924-a722-5b12e4ae7eb5" containerName="collect-profiles" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.298406 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.304324 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8v49" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.304380 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.304634 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.304637 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.304694 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.328521 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4"] Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.459692 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ktm\" (UniqueName: \"kubernetes.io/projected/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-kube-api-access-x6ktm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.460139 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: 
\"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.460205 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.460245 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.460362 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.460426 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.460529 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.561965 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.562114 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6ktm\" (UniqueName: \"kubernetes.io/projected/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-kube-api-access-x6ktm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.562204 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.562237 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.562283 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.562321 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.562356 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.567334 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.567337 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.567507 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.568561 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.569823 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.570139 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.587951 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6ktm\" (UniqueName: \"kubernetes.io/projected/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-kube-api-access-x6ktm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:47 crc kubenswrapper[4886]: I1124 09:31:47.616022 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:31:48 crc kubenswrapper[4886]: I1124 09:31:48.209319 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4"] Nov 24 09:31:48 crc kubenswrapper[4886]: I1124 09:31:48.850204 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:31:48 crc kubenswrapper[4886]: E1124 09:31:48.851320 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:31:49 crc kubenswrapper[4886]: I1124 09:31:49.171463 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" event={"ID":"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb","Type":"ContainerStarted","Data":"66cbd376c53501452215a6d93bddec9a64917a6936be2c67f7cd024d0723f635"} Nov 24 09:31:49 crc kubenswrapper[4886]: I1124 09:31:49.171517 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" event={"ID":"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb","Type":"ContainerStarted","Data":"3a2a7ad0a9c874a1885e7340feda714f1a1d7df89255e476b2d5d2e6669a7c2e"} Nov 24 09:31:49 crc kubenswrapper[4886]: I1124 09:31:49.189069 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" podStartSLOduration=1.698620586 podStartE2EDuration="2.189046572s" podCreationTimestamp="2025-11-24 09:31:47 +0000 UTC" firstStartedPulling="2025-11-24 
09:31:48.209995292 +0000 UTC m=+2564.096733427" lastFinishedPulling="2025-11-24 09:31:48.700421268 +0000 UTC m=+2564.587159413" observedRunningTime="2025-11-24 09:31:49.187969962 +0000 UTC m=+2565.074708147" watchObservedRunningTime="2025-11-24 09:31:49.189046572 +0000 UTC m=+2565.075784697" Nov 24 09:32:03 crc kubenswrapper[4886]: I1124 09:32:03.849220 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:32:03 crc kubenswrapper[4886]: E1124 09:32:03.849935 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:32:17 crc kubenswrapper[4886]: I1124 09:32:17.849924 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:32:17 crc kubenswrapper[4886]: E1124 09:32:17.850853 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:32:29 crc kubenswrapper[4886]: I1124 09:32:29.849140 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:32:29 crc kubenswrapper[4886]: E1124 09:32:29.850119 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:32:44 crc kubenswrapper[4886]: I1124 09:32:44.863063 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:32:44 crc kubenswrapper[4886]: E1124 09:32:44.864254 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:32:57 crc kubenswrapper[4886]: I1124 09:32:57.848700 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:32:57 crc kubenswrapper[4886]: E1124 09:32:57.849318 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:33:08 crc kubenswrapper[4886]: I1124 09:33:08.849809 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:33:08 crc kubenswrapper[4886]: E1124 09:33:08.850917 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:33:19 crc kubenswrapper[4886]: I1124 09:33:19.849425 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:33:19 crc kubenswrapper[4886]: E1124 09:33:19.850080 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:33:34 crc kubenswrapper[4886]: I1124 09:33:34.858107 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:33:34 crc kubenswrapper[4886]: E1124 09:33:34.858984 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:33:46 crc kubenswrapper[4886]: I1124 09:33:46.850402 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:33:46 crc kubenswrapper[4886]: E1124 09:33:46.851860 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:33:58 crc kubenswrapper[4886]: I1124 09:33:58.849706 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:33:58 crc kubenswrapper[4886]: E1124 09:33:58.850370 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:34:07 crc kubenswrapper[4886]: I1124 09:34:07.416330 4886 generic.go:334] "Generic (PLEG): container finished" podID="ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" containerID="66cbd376c53501452215a6d93bddec9a64917a6936be2c67f7cd024d0723f635" exitCode=0 Nov 24 09:34:07 crc kubenswrapper[4886]: I1124 09:34:07.416406 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" event={"ID":"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb","Type":"ContainerDied","Data":"66cbd376c53501452215a6d93bddec9a64917a6936be2c67f7cd024d0723f635"} Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.843434 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.870498 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-inventory\") pod \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.870567 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-2\") pod \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.870603 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ssh-key\") pod \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.870644 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-0\") pod \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.870727 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6ktm\" (UniqueName: \"kubernetes.io/projected/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-kube-api-access-x6ktm\") pod \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.870839 4886 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-1\") pod \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.870872 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-telemetry-combined-ca-bundle\") pod \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\" (UID: \"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb\") " Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.876014 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" (UID: "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.877636 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-kube-api-access-x6ktm" (OuterVolumeSpecName: "kube-api-access-x6ktm") pod "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" (UID: "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb"). InnerVolumeSpecName "kube-api-access-x6ktm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.900174 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" (UID: "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.902009 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" (UID: "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.905827 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-inventory" (OuterVolumeSpecName: "inventory") pod "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" (UID: "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.907848 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" (UID: "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.909647 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" (UID: "ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.972940 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.972976 4886 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.972994 4886 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.973007 4886 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.973018 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.973029 4886 reconciler_common.go:293] "Volume detached for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:08 crc kubenswrapper[4886]: I1124 09:34:08.973041 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6ktm\" (UniqueName: \"kubernetes.io/projected/ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb-kube-api-access-x6ktm\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:09 crc kubenswrapper[4886]: I1124 09:34:09.445666 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" event={"ID":"ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb","Type":"ContainerDied","Data":"3a2a7ad0a9c874a1885e7340feda714f1a1d7df89255e476b2d5d2e6669a7c2e"} Nov 24 09:34:09 crc kubenswrapper[4886]: I1124 09:34:09.445705 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a2a7ad0a9c874a1885e7340feda714f1a1d7df89255e476b2d5d2e6669a7c2e" Nov 24 09:34:09 crc kubenswrapper[4886]: I1124 09:34:09.445758 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4" Nov 24 09:34:13 crc kubenswrapper[4886]: I1124 09:34:13.849894 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.487880 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"28524ef17704b6f8c15ccaef5b8a47fad706e538601bd9483a53a9befb235aff"} Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.718952 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gzh7k"] Nov 24 09:34:14 crc kubenswrapper[4886]: E1124 09:34:14.719438 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.719457 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.719669 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.721036 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.729773 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzh7k"] Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.797370 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-utilities\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.797736 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-catalog-content\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.798088 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsll8\" (UniqueName: \"kubernetes.io/projected/20ae5141-e2fd-41be-8a2f-e741e289ea42-kube-api-access-hsll8\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.901282 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-utilities\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.901410 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-catalog-content\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.901802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsll8\" (UniqueName: \"kubernetes.io/projected/20ae5141-e2fd-41be-8a2f-e741e289ea42-kube-api-access-hsll8\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.905256 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-utilities\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.905677 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-catalog-content\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:14 crc kubenswrapper[4886]: I1124 09:34:14.924720 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsll8\" (UniqueName: \"kubernetes.io/projected/20ae5141-e2fd-41be-8a2f-e741e289ea42-kube-api-access-hsll8\") pod \"certified-operators-gzh7k\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:15 crc kubenswrapper[4886]: I1124 09:34:15.049401 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:15 crc kubenswrapper[4886]: I1124 09:34:15.584071 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzh7k"] Nov 24 09:34:15 crc kubenswrapper[4886]: W1124 09:34:15.588303 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20ae5141_e2fd_41be_8a2f_e741e289ea42.slice/crio-f6acac062ea16a50ee757d59f7f427fb8f057a687910eb4acddbf77f015bd97f WatchSource:0}: Error finding container f6acac062ea16a50ee757d59f7f427fb8f057a687910eb4acddbf77f015bd97f: Status 404 returned error can't find the container with id f6acac062ea16a50ee757d59f7f427fb8f057a687910eb4acddbf77f015bd97f Nov 24 09:34:16 crc kubenswrapper[4886]: I1124 09:34:16.527944 4886 generic.go:334] "Generic (PLEG): container finished" podID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerID="c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944" exitCode=0 Nov 24 09:34:16 crc kubenswrapper[4886]: I1124 09:34:16.528074 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzh7k" event={"ID":"20ae5141-e2fd-41be-8a2f-e741e289ea42","Type":"ContainerDied","Data":"c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944"} Nov 24 09:34:16 crc kubenswrapper[4886]: I1124 09:34:16.529199 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzh7k" event={"ID":"20ae5141-e2fd-41be-8a2f-e741e289ea42","Type":"ContainerStarted","Data":"f6acac062ea16a50ee757d59f7f427fb8f057a687910eb4acddbf77f015bd97f"} Nov 24 09:34:16 crc kubenswrapper[4886]: I1124 09:34:16.532834 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:34:19 crc kubenswrapper[4886]: I1124 09:34:19.561314 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerID="5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500" exitCode=0 Nov 24 09:34:19 crc kubenswrapper[4886]: I1124 09:34:19.561511 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzh7k" event={"ID":"20ae5141-e2fd-41be-8a2f-e741e289ea42","Type":"ContainerDied","Data":"5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500"} Nov 24 09:34:20 crc kubenswrapper[4886]: I1124 09:34:20.572907 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzh7k" event={"ID":"20ae5141-e2fd-41be-8a2f-e741e289ea42","Type":"ContainerStarted","Data":"4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09"} Nov 24 09:34:20 crc kubenswrapper[4886]: I1124 09:34:20.610234 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gzh7k" podStartSLOduration=3.084885162 podStartE2EDuration="6.610218679s" podCreationTimestamp="2025-11-24 09:34:14 +0000 UTC" firstStartedPulling="2025-11-24 09:34:16.532484842 +0000 UTC m=+2712.419222987" lastFinishedPulling="2025-11-24 09:34:20.057818369 +0000 UTC m=+2715.944556504" observedRunningTime="2025-11-24 09:34:20.609443477 +0000 UTC m=+2716.496181622" watchObservedRunningTime="2025-11-24 09:34:20.610218679 +0000 UTC m=+2716.496956814" Nov 24 09:34:25 crc kubenswrapper[4886]: I1124 09:34:25.049987 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:25 crc kubenswrapper[4886]: I1124 09:34:25.050595 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:25 crc kubenswrapper[4886]: I1124 09:34:25.102877 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 
09:34:25 crc kubenswrapper[4886]: I1124 09:34:25.665065 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:25 crc kubenswrapper[4886]: I1124 09:34:25.707907 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzh7k"] Nov 24 09:34:27 crc kubenswrapper[4886]: I1124 09:34:27.638561 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gzh7k" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerName="registry-server" containerID="cri-o://4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09" gracePeriod=2 Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.099356 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.160389 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-catalog-content\") pod \"20ae5141-e2fd-41be-8a2f-e741e289ea42\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.160652 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsll8\" (UniqueName: \"kubernetes.io/projected/20ae5141-e2fd-41be-8a2f-e741e289ea42-kube-api-access-hsll8\") pod \"20ae5141-e2fd-41be-8a2f-e741e289ea42\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.160708 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-utilities\") pod \"20ae5141-e2fd-41be-8a2f-e741e289ea42\" (UID: \"20ae5141-e2fd-41be-8a2f-e741e289ea42\") " Nov 
24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.161906 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-utilities" (OuterVolumeSpecName: "utilities") pod "20ae5141-e2fd-41be-8a2f-e741e289ea42" (UID: "20ae5141-e2fd-41be-8a2f-e741e289ea42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.176382 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ae5141-e2fd-41be-8a2f-e741e289ea42-kube-api-access-hsll8" (OuterVolumeSpecName: "kube-api-access-hsll8") pod "20ae5141-e2fd-41be-8a2f-e741e289ea42" (UID: "20ae5141-e2fd-41be-8a2f-e741e289ea42"). InnerVolumeSpecName "kube-api-access-hsll8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.219384 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20ae5141-e2fd-41be-8a2f-e741e289ea42" (UID: "20ae5141-e2fd-41be-8a2f-e741e289ea42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.262938 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsll8\" (UniqueName: \"kubernetes.io/projected/20ae5141-e2fd-41be-8a2f-e741e289ea42-kube-api-access-hsll8\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.262985 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.263002 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ae5141-e2fd-41be-8a2f-e741e289ea42-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.649043 4886 generic.go:334] "Generic (PLEG): container finished" podID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerID="4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09" exitCode=0 Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.649101 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzh7k" event={"ID":"20ae5141-e2fd-41be-8a2f-e741e289ea42","Type":"ContainerDied","Data":"4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09"} Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.649108 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzh7k" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.649132 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzh7k" event={"ID":"20ae5141-e2fd-41be-8a2f-e741e289ea42","Type":"ContainerDied","Data":"f6acac062ea16a50ee757d59f7f427fb8f057a687910eb4acddbf77f015bd97f"} Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.649165 4886 scope.go:117] "RemoveContainer" containerID="4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.675046 4886 scope.go:117] "RemoveContainer" containerID="5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.691788 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzh7k"] Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.699125 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gzh7k"] Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.715575 4886 scope.go:117] "RemoveContainer" containerID="c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.757649 4886 scope.go:117] "RemoveContainer" containerID="4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09" Nov 24 09:34:28 crc kubenswrapper[4886]: E1124 09:34:28.758898 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09\": container with ID starting with 4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09 not found: ID does not exist" containerID="4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.758931 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09"} err="failed to get container status \"4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09\": rpc error: code = NotFound desc = could not find container \"4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09\": container with ID starting with 4354de86033e605b0a2e25fdaed86958e34b40799b36753dab6195679a1a1e09 not found: ID does not exist" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.758952 4886 scope.go:117] "RemoveContainer" containerID="5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500" Nov 24 09:34:28 crc kubenswrapper[4886]: E1124 09:34:28.760098 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500\": container with ID starting with 5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500 not found: ID does not exist" containerID="5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.760119 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500"} err="failed to get container status \"5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500\": rpc error: code = NotFound desc = could not find container \"5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500\": container with ID starting with 5689d05e08acc1397d0a75ba82f68016672e2fd5ff8163886079162a9f4e9500 not found: ID does not exist" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.760137 4886 scope.go:117] "RemoveContainer" containerID="c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944" Nov 24 09:34:28 crc kubenswrapper[4886]: E1124 
09:34:28.760547 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944\": container with ID starting with c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944 not found: ID does not exist" containerID="c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.760589 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944"} err="failed to get container status \"c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944\": rpc error: code = NotFound desc = could not find container \"c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944\": container with ID starting with c3a1495829c200c116f1e0235d3e12bb675c295602298e1376874d07ba9a1944 not found: ID does not exist" Nov 24 09:34:28 crc kubenswrapper[4886]: I1124 09:34:28.861027 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" path="/var/lib/kubelet/pods/20ae5141-e2fd-41be-8a2f-e741e289ea42/volumes" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.213003 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 09:35:08 crc kubenswrapper[4886]: E1124 09:35:08.214034 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerName="extract-utilities" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.214052 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerName="extract-utilities" Nov 24 09:35:08 crc kubenswrapper[4886]: E1124 09:35:08.214066 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" 
containerName="registry-server" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.214071 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerName="registry-server" Nov 24 09:35:08 crc kubenswrapper[4886]: E1124 09:35:08.214095 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerName="extract-content" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.214103 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerName="extract-content" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.214313 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ae5141-e2fd-41be-8a2f-e741e289ea42" containerName="registry-server" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.215085 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.217768 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.218096 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.218315 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.224928 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.226614 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vl5zb" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.302966 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.303138 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.303436 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-config-data\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406029 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406568 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406628 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406695 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406742 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406845 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406883 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406927 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-config-data\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.406992 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkr8x\" (UniqueName: \"kubernetes.io/projected/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-kube-api-access-pkr8x\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.408145 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.409240 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-config-data\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.415932 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.509128 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.509227 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.509329 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.509353 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.509410 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkr8x\" (UniqueName: \"kubernetes.io/projected/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-kube-api-access-pkr8x\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.509479 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" 
(UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.509870 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.509968 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.510333 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.513610 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.513944 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 
09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.529091 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkr8x\" (UniqueName: \"kubernetes.io/projected/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-kube-api-access-pkr8x\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.541484 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " pod="openstack/tempest-tests-tempest" Nov 24 09:35:08 crc kubenswrapper[4886]: I1124 09:35:08.835818 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 09:35:09 crc kubenswrapper[4886]: I1124 09:35:09.274959 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 09:35:10 crc kubenswrapper[4886]: I1124 09:35:10.017785 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7","Type":"ContainerStarted","Data":"baae8c0aaa40f60839c35466cabeee4144dcc81e641267e599faae0702a72938"} Nov 24 09:35:40 crc kubenswrapper[4886]: E1124 09:35:40.208583 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 24 09:35:40 crc kubenswrapper[4886]: E1124 09:35:40.209441 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkr8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:35:40 crc kubenswrapper[4886]: E1124 09:35:40.210648 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" Nov 24 09:35:40 crc kubenswrapper[4886]: E1124 09:35:40.323444 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" Nov 24 09:35:48 crc 
kubenswrapper[4886]: I1124 09:35:48.696442 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9cgmx"] Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.699624 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.706960 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9cgmx"] Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.867904 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-utilities\") pod \"redhat-operators-9cgmx\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.868014 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk589\" (UniqueName: \"kubernetes.io/projected/4e1b427b-3215-47bb-91ca-19b628f71f8c-kube-api-access-xk589\") pod \"redhat-operators-9cgmx\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.868077 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-catalog-content\") pod \"redhat-operators-9cgmx\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.970086 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-catalog-content\") pod 
\"redhat-operators-9cgmx\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.970290 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-utilities\") pod \"redhat-operators-9cgmx\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.970361 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk589\" (UniqueName: \"kubernetes.io/projected/4e1b427b-3215-47bb-91ca-19b628f71f8c-kube-api-access-xk589\") pod \"redhat-operators-9cgmx\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.970855 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-utilities\") pod \"redhat-operators-9cgmx\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:48 crc kubenswrapper[4886]: I1124 09:35:48.970855 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-catalog-content\") pod \"redhat-operators-9cgmx\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:49 crc kubenswrapper[4886]: I1124 09:35:48.992140 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk589\" (UniqueName: \"kubernetes.io/projected/4e1b427b-3215-47bb-91ca-19b628f71f8c-kube-api-access-xk589\") pod \"redhat-operators-9cgmx\" (UID: 
\"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:49 crc kubenswrapper[4886]: I1124 09:35:49.026119 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:49 crc kubenswrapper[4886]: I1124 09:35:49.506855 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9cgmx"] Nov 24 09:35:50 crc kubenswrapper[4886]: I1124 09:35:50.411031 4886 generic.go:334] "Generic (PLEG): container finished" podID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerID="42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe" exitCode=0 Nov 24 09:35:50 crc kubenswrapper[4886]: I1124 09:35:50.411095 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cgmx" event={"ID":"4e1b427b-3215-47bb-91ca-19b628f71f8c","Type":"ContainerDied","Data":"42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe"} Nov 24 09:35:50 crc kubenswrapper[4886]: I1124 09:35:50.411387 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cgmx" event={"ID":"4e1b427b-3215-47bb-91ca-19b628f71f8c","Type":"ContainerStarted","Data":"d6c634bf703d8ebd1693cd0a5a96ba7ebaa1ad1415e015c05a8412285a44daa3"} Nov 24 09:35:52 crc kubenswrapper[4886]: I1124 09:35:52.444844 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cgmx" event={"ID":"4e1b427b-3215-47bb-91ca-19b628f71f8c","Type":"ContainerStarted","Data":"f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760"} Nov 24 09:35:54 crc kubenswrapper[4886]: I1124 09:35:54.466398 4886 generic.go:334] "Generic (PLEG): container finished" podID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerID="f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760" exitCode=0 Nov 24 09:35:54 crc kubenswrapper[4886]: I1124 09:35:54.466500 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cgmx" event={"ID":"4e1b427b-3215-47bb-91ca-19b628f71f8c","Type":"ContainerDied","Data":"f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760"} Nov 24 09:35:56 crc kubenswrapper[4886]: I1124 09:35:56.484480 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cgmx" event={"ID":"4e1b427b-3215-47bb-91ca-19b628f71f8c","Type":"ContainerStarted","Data":"582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af"} Nov 24 09:35:56 crc kubenswrapper[4886]: I1124 09:35:56.504056 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9cgmx" podStartSLOduration=2.698301077 podStartE2EDuration="8.504034754s" podCreationTimestamp="2025-11-24 09:35:48 +0000 UTC" firstStartedPulling="2025-11-24 09:35:50.413377503 +0000 UTC m=+2806.300115648" lastFinishedPulling="2025-11-24 09:35:56.21911119 +0000 UTC m=+2812.105849325" observedRunningTime="2025-11-24 09:35:56.503776276 +0000 UTC m=+2812.390514411" watchObservedRunningTime="2025-11-24 09:35:56.504034754 +0000 UTC m=+2812.390772889" Nov 24 09:35:57 crc kubenswrapper[4886]: I1124 09:35:57.496872 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7","Type":"ContainerStarted","Data":"582f3d4c5d02870dc2e229f11ddf735e7c4316ddbb3d39f51d692866555e70ff"} Nov 24 09:35:57 crc kubenswrapper[4886]: I1124 09:35:57.521711 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.707716167 podStartE2EDuration="50.52169337s" podCreationTimestamp="2025-11-24 09:35:07 +0000 UTC" firstStartedPulling="2025-11-24 09:35:09.290382655 +0000 UTC m=+2765.177120790" lastFinishedPulling="2025-11-24 09:35:56.104359868 +0000 UTC m=+2811.991097993" observedRunningTime="2025-11-24 
09:35:57.513376923 +0000 UTC m=+2813.400115058" watchObservedRunningTime="2025-11-24 09:35:57.52169337 +0000 UTC m=+2813.408431505" Nov 24 09:35:59 crc kubenswrapper[4886]: I1124 09:35:59.026891 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:35:59 crc kubenswrapper[4886]: I1124 09:35:59.026969 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:36:00 crc kubenswrapper[4886]: I1124 09:36:00.115043 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9cgmx" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="registry-server" probeResult="failure" output=< Nov 24 09:36:00 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:36:00 crc kubenswrapper[4886]: > Nov 24 09:36:09 crc kubenswrapper[4886]: I1124 09:36:09.086500 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:36:09 crc kubenswrapper[4886]: I1124 09:36:09.156103 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:36:09 crc kubenswrapper[4886]: I1124 09:36:09.323958 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9cgmx"] Nov 24 09:36:10 crc kubenswrapper[4886]: I1124 09:36:10.607250 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9cgmx" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="registry-server" containerID="cri-o://582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af" gracePeriod=2 Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.083711 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.218616 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk589\" (UniqueName: \"kubernetes.io/projected/4e1b427b-3215-47bb-91ca-19b628f71f8c-kube-api-access-xk589\") pod \"4e1b427b-3215-47bb-91ca-19b628f71f8c\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.218986 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-utilities\") pod \"4e1b427b-3215-47bb-91ca-19b628f71f8c\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.219021 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-catalog-content\") pod \"4e1b427b-3215-47bb-91ca-19b628f71f8c\" (UID: \"4e1b427b-3215-47bb-91ca-19b628f71f8c\") " Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.219662 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-utilities" (OuterVolumeSpecName: "utilities") pod "4e1b427b-3215-47bb-91ca-19b628f71f8c" (UID: "4e1b427b-3215-47bb-91ca-19b628f71f8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.224896 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1b427b-3215-47bb-91ca-19b628f71f8c-kube-api-access-xk589" (OuterVolumeSpecName: "kube-api-access-xk589") pod "4e1b427b-3215-47bb-91ca-19b628f71f8c" (UID: "4e1b427b-3215-47bb-91ca-19b628f71f8c"). InnerVolumeSpecName "kube-api-access-xk589". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.321662 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.321706 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk589\" (UniqueName: \"kubernetes.io/projected/4e1b427b-3215-47bb-91ca-19b628f71f8c-kube-api-access-xk589\") on node \"crc\" DevicePath \"\"" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.326237 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e1b427b-3215-47bb-91ca-19b628f71f8c" (UID: "4e1b427b-3215-47bb-91ca-19b628f71f8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.433293 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b427b-3215-47bb-91ca-19b628f71f8c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.618066 4886 generic.go:334] "Generic (PLEG): container finished" podID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerID="582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af" exitCode=0 Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.618139 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cgmx" event={"ID":"4e1b427b-3215-47bb-91ca-19b628f71f8c","Type":"ContainerDied","Data":"582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af"} Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.618394 4886 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-9cgmx" event={"ID":"4e1b427b-3215-47bb-91ca-19b628f71f8c","Type":"ContainerDied","Data":"d6c634bf703d8ebd1693cd0a5a96ba7ebaa1ad1415e015c05a8412285a44daa3"} Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.618417 4886 scope.go:117] "RemoveContainer" containerID="582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.618182 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9cgmx" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.645360 4886 scope.go:117] "RemoveContainer" containerID="f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.653929 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9cgmx"] Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.663033 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9cgmx"] Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.677900 4886 scope.go:117] "RemoveContainer" containerID="42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.709490 4886 scope.go:117] "RemoveContainer" containerID="582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af" Nov 24 09:36:11 crc kubenswrapper[4886]: E1124 09:36:11.710176 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af\": container with ID starting with 582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af not found: ID does not exist" containerID="582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.710211 4886 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af"} err="failed to get container status \"582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af\": rpc error: code = NotFound desc = could not find container \"582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af\": container with ID starting with 582f086faf570db60fd85f9ac43a8a8e0b36758deabfa84ad7a1215cd40803af not found: ID does not exist" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.710230 4886 scope.go:117] "RemoveContainer" containerID="f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760" Nov 24 09:36:11 crc kubenswrapper[4886]: E1124 09:36:11.711182 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760\": container with ID starting with f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760 not found: ID does not exist" containerID="f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.711201 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760"} err="failed to get container status \"f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760\": rpc error: code = NotFound desc = could not find container \"f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760\": container with ID starting with f79f12a4439dd4c57713e3a4289ce682f446a021912cdafea132af1d44f45760 not found: ID does not exist" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.711213 4886 scope.go:117] "RemoveContainer" containerID="42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe" Nov 24 09:36:11 crc kubenswrapper[4886]: E1124 
09:36:11.711595 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe\": container with ID starting with 42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe not found: ID does not exist" containerID="42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe" Nov 24 09:36:11 crc kubenswrapper[4886]: I1124 09:36:11.711617 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe"} err="failed to get container status \"42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe\": rpc error: code = NotFound desc = could not find container \"42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe\": container with ID starting with 42893b97364b81009dfdf5ee2aeba855d61433425433269c5eb42d9b0c829bfe not found: ID does not exist" Nov 24 09:36:12 crc kubenswrapper[4886]: I1124 09:36:12.862263 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" path="/var/lib/kubelet/pods/4e1b427b-3215-47bb-91ca-19b628f71f8c/volumes" Nov 24 09:36:31 crc kubenswrapper[4886]: I1124 09:36:31.783945 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:36:31 crc kubenswrapper[4886]: I1124 09:36:31.784608 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 09:37:01 crc kubenswrapper[4886]: I1124 09:37:01.784827 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:37:01 crc kubenswrapper[4886]: I1124 09:37:01.785388 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:37:25 crc kubenswrapper[4886]: I1124 09:37:25.952508 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fks87"] Nov 24 09:37:25 crc kubenswrapper[4886]: E1124 09:37:25.953673 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="extract-utilities" Nov 24 09:37:25 crc kubenswrapper[4886]: I1124 09:37:25.953692 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="extract-utilities" Nov 24 09:37:25 crc kubenswrapper[4886]: E1124 09:37:25.953713 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="extract-content" Nov 24 09:37:25 crc kubenswrapper[4886]: I1124 09:37:25.953722 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="extract-content" Nov 24 09:37:25 crc kubenswrapper[4886]: E1124 09:37:25.953740 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="registry-server" Nov 24 09:37:25 crc kubenswrapper[4886]: I1124 09:37:25.953749 4886 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="registry-server" Nov 24 09:37:25 crc kubenswrapper[4886]: I1124 09:37:25.953982 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1b427b-3215-47bb-91ca-19b628f71f8c" containerName="registry-server" Nov 24 09:37:25 crc kubenswrapper[4886]: I1124 09:37:25.955892 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:25 crc kubenswrapper[4886]: I1124 09:37:25.961887 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fks87"] Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.094291 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-utilities\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.094361 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-catalog-content\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.094483 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwc6\" (UniqueName: \"kubernetes.io/projected/e2b82948-72ce-4741-8956-67c43e51eaaa-kube-api-access-jwwc6\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 
09:37:26.196284 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-utilities\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.196352 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-catalog-content\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.196420 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwwc6\" (UniqueName: \"kubernetes.io/projected/e2b82948-72ce-4741-8956-67c43e51eaaa-kube-api-access-jwwc6\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.196889 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-utilities\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.196962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-catalog-content\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.215673 4886 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwwc6\" (UniqueName: \"kubernetes.io/projected/e2b82948-72ce-4741-8956-67c43e51eaaa-kube-api-access-jwwc6\") pod \"community-operators-fks87\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.283481 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.364442 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gnpbb"] Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.372750 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.380746 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnpbb"] Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.508629 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbrs\" (UniqueName: \"kubernetes.io/projected/866e35fd-797b-470c-8957-b6d6ce878425-kube-api-access-jrbrs\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.508736 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-catalog-content\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.508798 4886 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-utilities\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.609875 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbrs\" (UniqueName: \"kubernetes.io/projected/866e35fd-797b-470c-8957-b6d6ce878425-kube-api-access-jrbrs\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.609983 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-catalog-content\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.610043 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-utilities\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.610591 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-utilities\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.611009 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-catalog-content\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.658227 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbrs\" (UniqueName: \"kubernetes.io/projected/866e35fd-797b-470c-8957-b6d6ce878425-kube-api-access-jrbrs\") pod \"redhat-marketplace-gnpbb\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:26 crc kubenswrapper[4886]: I1124 09:37:26.775136 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:27 crc kubenswrapper[4886]: I1124 09:37:27.009647 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fks87"] Nov 24 09:37:27 crc kubenswrapper[4886]: I1124 09:37:27.313939 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fks87" event={"ID":"e2b82948-72ce-4741-8956-67c43e51eaaa","Type":"ContainerStarted","Data":"88887e3be8b13d9d543720b23e827e3eeaed9f5f66f8c7c547c9b12093cbfb38"} Nov 24 09:37:27 crc kubenswrapper[4886]: I1124 09:37:27.314445 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fks87" event={"ID":"e2b82948-72ce-4741-8956-67c43e51eaaa","Type":"ContainerStarted","Data":"01d3340ec17db0f5d4c0db3b8071e32e5aebf6208188c5304897750f02a7512f"} Nov 24 09:37:27 crc kubenswrapper[4886]: I1124 09:37:27.344317 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnpbb"] Nov 24 09:37:28 crc kubenswrapper[4886]: I1124 09:37:28.325144 4886 generic.go:334] "Generic (PLEG): container finished" 
podID="866e35fd-797b-470c-8957-b6d6ce878425" containerID="5b4def02c5cc7f2209d60ea999263a05f35402c4ccc1bdd1a3fc33766c7f050d" exitCode=0 Nov 24 09:37:28 crc kubenswrapper[4886]: I1124 09:37:28.325503 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnpbb" event={"ID":"866e35fd-797b-470c-8957-b6d6ce878425","Type":"ContainerDied","Data":"5b4def02c5cc7f2209d60ea999263a05f35402c4ccc1bdd1a3fc33766c7f050d"} Nov 24 09:37:28 crc kubenswrapper[4886]: I1124 09:37:28.325546 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnpbb" event={"ID":"866e35fd-797b-470c-8957-b6d6ce878425","Type":"ContainerStarted","Data":"7b3c0c1a0c91d7b99954d779ac3f01e14aa1e6fbd476ff919d1fe28f2b7ff73b"} Nov 24 09:37:28 crc kubenswrapper[4886]: I1124 09:37:28.328266 4886 generic.go:334] "Generic (PLEG): container finished" podID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerID="88887e3be8b13d9d543720b23e827e3eeaed9f5f66f8c7c547c9b12093cbfb38" exitCode=0 Nov 24 09:37:28 crc kubenswrapper[4886]: I1124 09:37:28.328302 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fks87" event={"ID":"e2b82948-72ce-4741-8956-67c43e51eaaa","Type":"ContainerDied","Data":"88887e3be8b13d9d543720b23e827e3eeaed9f5f66f8c7c547c9b12093cbfb38"} Nov 24 09:37:30 crc kubenswrapper[4886]: I1124 09:37:30.350615 4886 generic.go:334] "Generic (PLEG): container finished" podID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerID="b0c756e27ef40f284e88e7f66c7937508e4e188c78b1f21987719c2ddc13042b" exitCode=0 Nov 24 09:37:30 crc kubenswrapper[4886]: I1124 09:37:30.350733 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fks87" event={"ID":"e2b82948-72ce-4741-8956-67c43e51eaaa","Type":"ContainerDied","Data":"b0c756e27ef40f284e88e7f66c7937508e4e188c78b1f21987719c2ddc13042b"} Nov 24 09:37:30 crc kubenswrapper[4886]: I1124 09:37:30.353809 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnpbb" event={"ID":"866e35fd-797b-470c-8957-b6d6ce878425","Type":"ContainerStarted","Data":"3eef1d856b4d30c68404dcbfe9efa480a44961ca124d57636cdb62277adfe2de"} Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.379652 4886 generic.go:334] "Generic (PLEG): container finished" podID="866e35fd-797b-470c-8957-b6d6ce878425" containerID="3eef1d856b4d30c68404dcbfe9efa480a44961ca124d57636cdb62277adfe2de" exitCode=0 Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.379716 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnpbb" event={"ID":"866e35fd-797b-470c-8957-b6d6ce878425","Type":"ContainerDied","Data":"3eef1d856b4d30c68404dcbfe9efa480a44961ca124d57636cdb62277adfe2de"} Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.383382 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fks87" event={"ID":"e2b82948-72ce-4741-8956-67c43e51eaaa","Type":"ContainerStarted","Data":"15bf6ef5cf0ff205835ceca1926f7fed32f0bd4d15ad101f8bcd774f33f102c1"} Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.421812 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fks87" podStartSLOduration=3.681912698 podStartE2EDuration="6.42179262s" podCreationTimestamp="2025-11-24 09:37:25 +0000 UTC" firstStartedPulling="2025-11-24 09:37:28.32988146 +0000 UTC m=+2904.216619585" lastFinishedPulling="2025-11-24 09:37:31.069761372 +0000 UTC m=+2906.956499507" observedRunningTime="2025-11-24 09:37:31.415630754 +0000 UTC m=+2907.302368899" watchObservedRunningTime="2025-11-24 09:37:31.42179262 +0000 UTC m=+2907.308530755" Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.784569 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.784917 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.784985 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.785840 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28524ef17704b6f8c15ccaef5b8a47fad706e538601bd9483a53a9befb235aff"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:37:31 crc kubenswrapper[4886]: I1124 09:37:31.785907 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://28524ef17704b6f8c15ccaef5b8a47fad706e538601bd9483a53a9befb235aff" gracePeriod=600 Nov 24 09:37:32 crc kubenswrapper[4886]: I1124 09:37:32.395759 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="28524ef17704b6f8c15ccaef5b8a47fad706e538601bd9483a53a9befb235aff" exitCode=0 Nov 24 09:37:32 crc kubenswrapper[4886]: I1124 09:37:32.396340 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"28524ef17704b6f8c15ccaef5b8a47fad706e538601bd9483a53a9befb235aff"} Nov 24 09:37:32 crc kubenswrapper[4886]: I1124 09:37:32.396412 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2"} Nov 24 09:37:32 crc kubenswrapper[4886]: I1124 09:37:32.396438 4886 scope.go:117] "RemoveContainer" containerID="c1638d7a83dddd919356693ac2e24d3a4c73032c91053a69ef35d33fca8c2b71" Nov 24 09:37:32 crc kubenswrapper[4886]: I1124 09:37:32.401279 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnpbb" event={"ID":"866e35fd-797b-470c-8957-b6d6ce878425","Type":"ContainerStarted","Data":"9e2f9329d94be43e19d38282f1a1e1556684ac63811765ef6fe9f1f0a71f132d"} Nov 24 09:37:33 crc kubenswrapper[4886]: I1124 09:37:33.447399 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gnpbb" podStartSLOduration=3.758627708 podStartE2EDuration="7.447358124s" podCreationTimestamp="2025-11-24 09:37:26 +0000 UTC" firstStartedPulling="2025-11-24 09:37:28.327281376 +0000 UTC m=+2904.214019511" lastFinishedPulling="2025-11-24 09:37:32.016011792 +0000 UTC m=+2907.902749927" observedRunningTime="2025-11-24 09:37:33.439268803 +0000 UTC m=+2909.326006948" watchObservedRunningTime="2025-11-24 09:37:33.447358124 +0000 UTC m=+2909.334096259" Nov 24 09:37:36 crc kubenswrapper[4886]: I1124 09:37:36.283969 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:36 crc kubenswrapper[4886]: I1124 09:37:36.285492 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:36 crc kubenswrapper[4886]: I1124 09:37:36.334548 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:36 crc kubenswrapper[4886]: I1124 09:37:36.504281 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:36 crc kubenswrapper[4886]: I1124 09:37:36.775934 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:36 crc kubenswrapper[4886]: I1124 09:37:36.776187 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:36 crc kubenswrapper[4886]: I1124 09:37:36.832733 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:37 crc kubenswrapper[4886]: I1124 09:37:37.518314 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:38 crc kubenswrapper[4886]: I1124 09:37:38.537289 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fks87"] Nov 24 09:37:39 crc kubenswrapper[4886]: I1124 09:37:39.135657 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnpbb"] Nov 24 09:37:39 crc kubenswrapper[4886]: I1124 09:37:39.480843 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fks87" podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerName="registry-server" containerID="cri-o://15bf6ef5cf0ff205835ceca1926f7fed32f0bd4d15ad101f8bcd774f33f102c1" gracePeriod=2 Nov 24 09:37:39 crc kubenswrapper[4886]: I1124 09:37:39.480964 
4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gnpbb" podUID="866e35fd-797b-470c-8957-b6d6ce878425" containerName="registry-server" containerID="cri-o://9e2f9329d94be43e19d38282f1a1e1556684ac63811765ef6fe9f1f0a71f132d" gracePeriod=2 Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.495676 4886 generic.go:334] "Generic (PLEG): container finished" podID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerID="15bf6ef5cf0ff205835ceca1926f7fed32f0bd4d15ad101f8bcd774f33f102c1" exitCode=0 Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.495799 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fks87" event={"ID":"e2b82948-72ce-4741-8956-67c43e51eaaa","Type":"ContainerDied","Data":"15bf6ef5cf0ff205835ceca1926f7fed32f0bd4d15ad101f8bcd774f33f102c1"} Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.497932 4886 generic.go:334] "Generic (PLEG): container finished" podID="866e35fd-797b-470c-8957-b6d6ce878425" containerID="9e2f9329d94be43e19d38282f1a1e1556684ac63811765ef6fe9f1f0a71f132d" exitCode=0 Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.497975 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnpbb" event={"ID":"866e35fd-797b-470c-8957-b6d6ce878425","Type":"ContainerDied","Data":"9e2f9329d94be43e19d38282f1a1e1556684ac63811765ef6fe9f1f0a71f132d"} Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.653209 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.662401 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.730349 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-catalog-content\") pod \"866e35fd-797b-470c-8957-b6d6ce878425\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.730501 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrbrs\" (UniqueName: \"kubernetes.io/projected/866e35fd-797b-470c-8957-b6d6ce878425-kube-api-access-jrbrs\") pod \"866e35fd-797b-470c-8957-b6d6ce878425\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.730561 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-utilities\") pod \"e2b82948-72ce-4741-8956-67c43e51eaaa\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.730586 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwwc6\" (UniqueName: \"kubernetes.io/projected/e2b82948-72ce-4741-8956-67c43e51eaaa-kube-api-access-jwwc6\") pod \"e2b82948-72ce-4741-8956-67c43e51eaaa\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.730624 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-utilities\") pod \"866e35fd-797b-470c-8957-b6d6ce878425\" (UID: \"866e35fd-797b-470c-8957-b6d6ce878425\") " Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.730677 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-catalog-content\") pod \"e2b82948-72ce-4741-8956-67c43e51eaaa\" (UID: \"e2b82948-72ce-4741-8956-67c43e51eaaa\") " Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.732838 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-utilities" (OuterVolumeSpecName: "utilities") pod "e2b82948-72ce-4741-8956-67c43e51eaaa" (UID: "e2b82948-72ce-4741-8956-67c43e51eaaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.733369 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-utilities" (OuterVolumeSpecName: "utilities") pod "866e35fd-797b-470c-8957-b6d6ce878425" (UID: "866e35fd-797b-470c-8957-b6d6ce878425"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.744526 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b82948-72ce-4741-8956-67c43e51eaaa-kube-api-access-jwwc6" (OuterVolumeSpecName: "kube-api-access-jwwc6") pod "e2b82948-72ce-4741-8956-67c43e51eaaa" (UID: "e2b82948-72ce-4741-8956-67c43e51eaaa"). InnerVolumeSpecName "kube-api-access-jwwc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.747388 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866e35fd-797b-470c-8957-b6d6ce878425-kube-api-access-jrbrs" (OuterVolumeSpecName: "kube-api-access-jrbrs") pod "866e35fd-797b-470c-8957-b6d6ce878425" (UID: "866e35fd-797b-470c-8957-b6d6ce878425"). InnerVolumeSpecName "kube-api-access-jrbrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.781883 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "866e35fd-797b-470c-8957-b6d6ce878425" (UID: "866e35fd-797b-470c-8957-b6d6ce878425"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.796103 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2b82948-72ce-4741-8956-67c43e51eaaa" (UID: "e2b82948-72ce-4741-8956-67c43e51eaaa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.834293 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.834324 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwwc6\" (UniqueName: \"kubernetes.io/projected/e2b82948-72ce-4741-8956-67c43e51eaaa-kube-api-access-jwwc6\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.834341 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.834349 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b82948-72ce-4741-8956-67c43e51eaaa-catalog-content\") on node \"crc\" 
DevicePath \"\"" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.834358 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/866e35fd-797b-470c-8957-b6d6ce878425-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:40 crc kubenswrapper[4886]: I1124 09:37:40.834367 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrbrs\" (UniqueName: \"kubernetes.io/projected/866e35fd-797b-470c-8957-b6d6ce878425-kube-api-access-jrbrs\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.511623 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gnpbb" event={"ID":"866e35fd-797b-470c-8957-b6d6ce878425","Type":"ContainerDied","Data":"7b3c0c1a0c91d7b99954d779ac3f01e14aa1e6fbd476ff919d1fe28f2b7ff73b"} Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.512259 4886 scope.go:117] "RemoveContainer" containerID="9e2f9329d94be43e19d38282f1a1e1556684ac63811765ef6fe9f1f0a71f132d" Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.511849 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gnpbb" Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.517597 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fks87" event={"ID":"e2b82948-72ce-4741-8956-67c43e51eaaa","Type":"ContainerDied","Data":"01d3340ec17db0f5d4c0db3b8071e32e5aebf6208188c5304897750f02a7512f"} Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.517657 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fks87" Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.544510 4886 scope.go:117] "RemoveContainer" containerID="3eef1d856b4d30c68404dcbfe9efa480a44961ca124d57636cdb62277adfe2de" Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.548300 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnpbb"] Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.573098 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gnpbb"] Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.582307 4886 scope.go:117] "RemoveContainer" containerID="5b4def02c5cc7f2209d60ea999263a05f35402c4ccc1bdd1a3fc33766c7f050d" Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.596251 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fks87"] Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.603369 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fks87"] Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.634636 4886 scope.go:117] "RemoveContainer" containerID="15bf6ef5cf0ff205835ceca1926f7fed32f0bd4d15ad101f8bcd774f33f102c1" Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.674075 4886 scope.go:117] "RemoveContainer" containerID="b0c756e27ef40f284e88e7f66c7937508e4e188c78b1f21987719c2ddc13042b" Nov 24 09:37:41 crc kubenswrapper[4886]: I1124 09:37:41.702611 4886 scope.go:117] "RemoveContainer" containerID="88887e3be8b13d9d543720b23e827e3eeaed9f5f66f8c7c547c9b12093cbfb38" Nov 24 09:37:42 crc kubenswrapper[4886]: I1124 09:37:42.858512 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866e35fd-797b-470c-8957-b6d6ce878425" path="/var/lib/kubelet/pods/866e35fd-797b-470c-8957-b6d6ce878425/volumes" Nov 24 09:37:42 crc kubenswrapper[4886]: I1124 09:37:42.859480 4886 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" path="/var/lib/kubelet/pods/e2b82948-72ce-4741-8956-67c43e51eaaa/volumes" Nov 24 09:40:01 crc kubenswrapper[4886]: I1124 09:40:01.785535 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:40:01 crc kubenswrapper[4886]: I1124 09:40:01.786500 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:40:31 crc kubenswrapper[4886]: I1124 09:40:31.784911 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:40:31 crc kubenswrapper[4886]: I1124 09:40:31.786012 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:41:01 crc kubenswrapper[4886]: I1124 09:41:01.784940 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 24 09:41:01 crc kubenswrapper[4886]: I1124 09:41:01.785912 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:41:01 crc kubenswrapper[4886]: I1124 09:41:01.785983 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:41:01 crc kubenswrapper[4886]: I1124 09:41:01.787356 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:41:01 crc kubenswrapper[4886]: I1124 09:41:01.787422 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" gracePeriod=600 Nov 24 09:41:02 crc kubenswrapper[4886]: E1124 09:41:02.028904 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:41:02 crc kubenswrapper[4886]: 
I1124 09:41:02.589993 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" exitCode=0 Nov 24 09:41:02 crc kubenswrapper[4886]: I1124 09:41:02.590086 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2"} Nov 24 09:41:02 crc kubenswrapper[4886]: I1124 09:41:02.590258 4886 scope.go:117] "RemoveContainer" containerID="28524ef17704b6f8c15ccaef5b8a47fad706e538601bd9483a53a9befb235aff" Nov 24 09:41:02 crc kubenswrapper[4886]: I1124 09:41:02.591457 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:41:02 crc kubenswrapper[4886]: E1124 09:41:02.592276 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:41:16 crc kubenswrapper[4886]: I1124 09:41:16.850318 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:41:16 crc kubenswrapper[4886]: E1124 09:41:16.851137 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:41:29 crc kubenswrapper[4886]: I1124 09:41:29.850219 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:41:29 crc kubenswrapper[4886]: E1124 09:41:29.851128 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:41:40 crc kubenswrapper[4886]: I1124 09:41:40.850048 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:41:40 crc kubenswrapper[4886]: E1124 09:41:40.851292 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:41:51 crc kubenswrapper[4886]: I1124 09:41:51.849756 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:41:51 crc kubenswrapper[4886]: E1124 09:41:51.851014 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:42:03 crc kubenswrapper[4886]: I1124 09:42:03.850179 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:42:03 crc kubenswrapper[4886]: E1124 09:42:03.851422 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:42:14 crc kubenswrapper[4886]: I1124 09:42:14.886233 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:42:14 crc kubenswrapper[4886]: E1124 09:42:14.888366 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:42:27 crc kubenswrapper[4886]: I1124 09:42:27.849850 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:42:27 crc kubenswrapper[4886]: E1124 09:42:27.850710 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:42:40 crc kubenswrapper[4886]: I1124 09:42:40.849188 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:42:40 crc kubenswrapper[4886]: E1124 09:42:40.850062 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:42:52 crc kubenswrapper[4886]: I1124 09:42:52.849758 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:42:52 crc kubenswrapper[4886]: E1124 09:42:52.850378 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:43:05 crc kubenswrapper[4886]: I1124 09:43:05.849992 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:43:05 crc kubenswrapper[4886]: E1124 09:43:05.850696 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:43:19 crc kubenswrapper[4886]: I1124 09:43:19.849754 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:43:19 crc kubenswrapper[4886]: E1124 09:43:19.850547 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:43:30 crc kubenswrapper[4886]: I1124 09:43:30.849292 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:43:30 crc kubenswrapper[4886]: E1124 09:43:30.850210 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:43:45 crc kubenswrapper[4886]: I1124 09:43:45.849600 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:43:45 crc kubenswrapper[4886]: E1124 09:43:45.850766 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:44:02 crc kubenswrapper[4886]: I1124 09:44:02.003565 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:44:02 crc kubenswrapper[4886]: E1124 09:44:02.007476 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:44:14 crc kubenswrapper[4886]: I1124 09:44:14.849951 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:44:14 crc kubenswrapper[4886]: E1124 09:44:14.850914 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:44:26 crc kubenswrapper[4886]: I1124 09:44:26.849217 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:44:26 crc kubenswrapper[4886]: E1124 09:44:26.849993 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:44:37 crc kubenswrapper[4886]: I1124 09:44:37.850695 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:44:37 crc kubenswrapper[4886]: E1124 09:44:37.851843 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:44:48 crc kubenswrapper[4886]: I1124 09:44:48.849476 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:44:48 crc kubenswrapper[4886]: E1124 09:44:48.850284 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.206105 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s"] Nov 24 09:45:00 crc kubenswrapper[4886]: E1124 09:45:00.208939 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerName="extract-content" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.208987 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerName="extract-content" Nov 24 09:45:00 crc kubenswrapper[4886]: E1124 09:45:00.209000 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerName="extract-utilities" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.209008 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerName="extract-utilities" Nov 24 09:45:00 crc kubenswrapper[4886]: E1124 09:45:00.209018 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866e35fd-797b-470c-8957-b6d6ce878425" containerName="extract-utilities" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.209027 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="866e35fd-797b-470c-8957-b6d6ce878425" containerName="extract-utilities" Nov 24 09:45:00 crc kubenswrapper[4886]: E1124 09:45:00.209041 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.209047 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4886]: E1124 09:45:00.209062 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866e35fd-797b-470c-8957-b6d6ce878425" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.209068 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="866e35fd-797b-470c-8957-b6d6ce878425" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4886]: E1124 09:45:00.209080 4886 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="866e35fd-797b-470c-8957-b6d6ce878425" containerName="extract-content" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.209087 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="866e35fd-797b-470c-8957-b6d6ce878425" containerName="extract-content" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.209330 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b82948-72ce-4741-8956-67c43e51eaaa" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.209342 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="866e35fd-797b-470c-8957-b6d6ce878425" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.210381 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.214747 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.215052 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.270552 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s"] Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.332225 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8db2a0e5-d762-4910-8be1-cb45140f49f0-config-volume\") pod \"collect-profiles-29399625-v2v7s\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.332585 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j829\" (UniqueName: \"kubernetes.io/projected/8db2a0e5-d762-4910-8be1-cb45140f49f0-kube-api-access-8j829\") pod \"collect-profiles-29399625-v2v7s\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.332810 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8db2a0e5-d762-4910-8be1-cb45140f49f0-secret-volume\") pod \"collect-profiles-29399625-v2v7s\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.434488 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8db2a0e5-d762-4910-8be1-cb45140f49f0-config-volume\") pod \"collect-profiles-29399625-v2v7s\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.434543 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j829\" (UniqueName: \"kubernetes.io/projected/8db2a0e5-d762-4910-8be1-cb45140f49f0-kube-api-access-8j829\") pod \"collect-profiles-29399625-v2v7s\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.434603 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8db2a0e5-d762-4910-8be1-cb45140f49f0-secret-volume\") pod \"collect-profiles-29399625-v2v7s\" (UID: 
\"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.435851 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8db2a0e5-d762-4910-8be1-cb45140f49f0-config-volume\") pod \"collect-profiles-29399625-v2v7s\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.444610 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8db2a0e5-d762-4910-8be1-cb45140f49f0-secret-volume\") pod \"collect-profiles-29399625-v2v7s\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.454476 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j829\" (UniqueName: \"kubernetes.io/projected/8db2a0e5-d762-4910-8be1-cb45140f49f0-kube-api-access-8j829\") pod \"collect-profiles-29399625-v2v7s\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:00 crc kubenswrapper[4886]: I1124 09:45:00.532097 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:01 crc kubenswrapper[4886]: I1124 09:45:01.034814 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s"] Nov 24 09:45:01 crc kubenswrapper[4886]: I1124 09:45:01.658736 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" event={"ID":"8db2a0e5-d762-4910-8be1-cb45140f49f0","Type":"ContainerStarted","Data":"36fb687c8bcdf1f9cddd59357e48f7a5eac6a8ad7b09b590dac4584d04f29bea"} Nov 24 09:45:01 crc kubenswrapper[4886]: I1124 09:45:01.659100 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" event={"ID":"8db2a0e5-d762-4910-8be1-cb45140f49f0","Type":"ContainerStarted","Data":"1c3d576a882e337b3aa6be4b1f59f36d139d836a8a3ce353536f676d48354630"} Nov 24 09:45:01 crc kubenswrapper[4886]: I1124 09:45:01.681681 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" podStartSLOduration=1.68165998 podStartE2EDuration="1.68165998s" podCreationTimestamp="2025-11-24 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:45:01.673478036 +0000 UTC m=+3357.560216171" watchObservedRunningTime="2025-11-24 09:45:01.68165998 +0000 UTC m=+3357.568398115" Nov 24 09:45:01 crc kubenswrapper[4886]: I1124 09:45:01.849879 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:45:01 crc kubenswrapper[4886]: E1124 09:45:01.850280 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:45:02 crc kubenswrapper[4886]: I1124 09:45:02.673526 4886 generic.go:334] "Generic (PLEG): container finished" podID="8db2a0e5-d762-4910-8be1-cb45140f49f0" containerID="36fb687c8bcdf1f9cddd59357e48f7a5eac6a8ad7b09b590dac4584d04f29bea" exitCode=0 Nov 24 09:45:02 crc kubenswrapper[4886]: I1124 09:45:02.673633 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" event={"ID":"8db2a0e5-d762-4910-8be1-cb45140f49f0","Type":"ContainerDied","Data":"36fb687c8bcdf1f9cddd59357e48f7a5eac6a8ad7b09b590dac4584d04f29bea"} Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.064487 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.234669 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8db2a0e5-d762-4910-8be1-cb45140f49f0-config-volume\") pod \"8db2a0e5-d762-4910-8be1-cb45140f49f0\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.234931 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8db2a0e5-d762-4910-8be1-cb45140f49f0-secret-volume\") pod \"8db2a0e5-d762-4910-8be1-cb45140f49f0\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.235066 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j829\" (UniqueName: 
\"kubernetes.io/projected/8db2a0e5-d762-4910-8be1-cb45140f49f0-kube-api-access-8j829\") pod \"8db2a0e5-d762-4910-8be1-cb45140f49f0\" (UID: \"8db2a0e5-d762-4910-8be1-cb45140f49f0\") " Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.235811 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db2a0e5-d762-4910-8be1-cb45140f49f0-config-volume" (OuterVolumeSpecName: "config-volume") pod "8db2a0e5-d762-4910-8be1-cb45140f49f0" (UID: "8db2a0e5-d762-4910-8be1-cb45140f49f0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.245386 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db2a0e5-d762-4910-8be1-cb45140f49f0-kube-api-access-8j829" (OuterVolumeSpecName: "kube-api-access-8j829") pod "8db2a0e5-d762-4910-8be1-cb45140f49f0" (UID: "8db2a0e5-d762-4910-8be1-cb45140f49f0"). InnerVolumeSpecName "kube-api-access-8j829". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.245518 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db2a0e5-d762-4910-8be1-cb45140f49f0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8db2a0e5-d762-4910-8be1-cb45140f49f0" (UID: "8db2a0e5-d762-4910-8be1-cb45140f49f0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.337982 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j829\" (UniqueName: \"kubernetes.io/projected/8db2a0e5-d762-4910-8be1-cb45140f49f0-kube-api-access-8j829\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.338249 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8db2a0e5-d762-4910-8be1-cb45140f49f0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.338315 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8db2a0e5-d762-4910-8be1-cb45140f49f0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.692846 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" event={"ID":"8db2a0e5-d762-4910-8be1-cb45140f49f0","Type":"ContainerDied","Data":"1c3d576a882e337b3aa6be4b1f59f36d139d836a8a3ce353536f676d48354630"} Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.692896 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3d576a882e337b3aa6be4b1f59f36d139d836a8a3ce353536f676d48354630" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.692968 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-v2v7s" Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.778328 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n"] Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.787852 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-vkx6n"] Nov 24 09:45:04 crc kubenswrapper[4886]: I1124 09:45:04.865287 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f350f6e8-25d9-410b-be41-c4d511d67599" path="/var/lib/kubelet/pods/f350f6e8-25d9-410b-be41-c4d511d67599/volumes" Nov 24 09:45:14 crc kubenswrapper[4886]: I1124 09:45:14.856798 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:45:14 crc kubenswrapper[4886]: E1124 09:45:14.858129 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.428174 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v8cm6"] Nov 24 09:45:23 crc kubenswrapper[4886]: E1124 09:45:23.429287 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db2a0e5-d762-4910-8be1-cb45140f49f0" containerName="collect-profiles" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.429308 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db2a0e5-d762-4910-8be1-cb45140f49f0" containerName="collect-profiles" Nov 24 09:45:23 crc 
kubenswrapper[4886]: I1124 09:45:23.429531 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db2a0e5-d762-4910-8be1-cb45140f49f0" containerName="collect-profiles" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.431100 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.441862 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v8cm6"] Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.478313 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-utilities\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.478421 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-catalog-content\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.478815 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nck8t\" (UniqueName: \"kubernetes.io/projected/e13f3f09-878a-4f75-bef8-fb57045287d0-kube-api-access-nck8t\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.581125 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-utilities\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.581277 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-catalog-content\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.581403 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nck8t\" (UniqueName: \"kubernetes.io/projected/e13f3f09-878a-4f75-bef8-fb57045287d0-kube-api-access-nck8t\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.581969 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-utilities\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.582065 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-catalog-content\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.604904 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nck8t\" (UniqueName: 
\"kubernetes.io/projected/e13f3f09-878a-4f75-bef8-fb57045287d0-kube-api-access-nck8t\") pod \"certified-operators-v8cm6\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:23 crc kubenswrapper[4886]: I1124 09:45:23.803650 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:24 crc kubenswrapper[4886]: I1124 09:45:24.382671 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v8cm6"] Nov 24 09:45:24 crc kubenswrapper[4886]: I1124 09:45:24.891808 4886 generic.go:334] "Generic (PLEG): container finished" podID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerID="e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c" exitCode=0 Nov 24 09:45:24 crc kubenswrapper[4886]: I1124 09:45:24.892106 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8cm6" event={"ID":"e13f3f09-878a-4f75-bef8-fb57045287d0","Type":"ContainerDied","Data":"e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c"} Nov 24 09:45:24 crc kubenswrapper[4886]: I1124 09:45:24.892138 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8cm6" event={"ID":"e13f3f09-878a-4f75-bef8-fb57045287d0","Type":"ContainerStarted","Data":"6069eda1792e8a0f71ec1f06797c9649f8e30fcb810c9ccf4cfeecb7513f536f"} Nov 24 09:45:24 crc kubenswrapper[4886]: I1124 09:45:24.894883 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:45:25 crc kubenswrapper[4886]: I1124 09:45:25.849384 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:45:25 crc kubenswrapper[4886]: E1124 09:45:25.850190 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:45:25 crc kubenswrapper[4886]: I1124 09:45:25.906658 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8cm6" event={"ID":"e13f3f09-878a-4f75-bef8-fb57045287d0","Type":"ContainerStarted","Data":"42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f"} Nov 24 09:45:26 crc kubenswrapper[4886]: I1124 09:45:26.918456 4886 generic.go:334] "Generic (PLEG): container finished" podID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerID="42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f" exitCode=0 Nov 24 09:45:26 crc kubenswrapper[4886]: I1124 09:45:26.918567 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8cm6" event={"ID":"e13f3f09-878a-4f75-bef8-fb57045287d0","Type":"ContainerDied","Data":"42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f"} Nov 24 09:45:27 crc kubenswrapper[4886]: I1124 09:45:27.932653 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8cm6" event={"ID":"e13f3f09-878a-4f75-bef8-fb57045287d0","Type":"ContainerStarted","Data":"460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66"} Nov 24 09:45:27 crc kubenswrapper[4886]: I1124 09:45:27.958430 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v8cm6" podStartSLOduration=2.538974407 podStartE2EDuration="4.958398033s" podCreationTimestamp="2025-11-24 09:45:23 +0000 UTC" firstStartedPulling="2025-11-24 09:45:24.894548294 +0000 UTC m=+3380.781286429" lastFinishedPulling="2025-11-24 
09:45:27.31397192 +0000 UTC m=+3383.200710055" observedRunningTime="2025-11-24 09:45:27.956458957 +0000 UTC m=+3383.843197092" watchObservedRunningTime="2025-11-24 09:45:27.958398033 +0000 UTC m=+3383.845136168" Nov 24 09:45:33 crc kubenswrapper[4886]: I1124 09:45:33.804413 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:33 crc kubenswrapper[4886]: I1124 09:45:33.805011 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:33 crc kubenswrapper[4886]: I1124 09:45:33.880302 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:34 crc kubenswrapper[4886]: I1124 09:45:34.038565 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:35 crc kubenswrapper[4886]: I1124 09:45:35.216710 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v8cm6"] Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.012097 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v8cm6" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerName="registry-server" containerID="cri-o://460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66" gracePeriod=2 Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.552773 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.605770 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nck8t\" (UniqueName: \"kubernetes.io/projected/e13f3f09-878a-4f75-bef8-fb57045287d0-kube-api-access-nck8t\") pod \"e13f3f09-878a-4f75-bef8-fb57045287d0\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.605937 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-utilities\") pod \"e13f3f09-878a-4f75-bef8-fb57045287d0\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.606076 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-catalog-content\") pod \"e13f3f09-878a-4f75-bef8-fb57045287d0\" (UID: \"e13f3f09-878a-4f75-bef8-fb57045287d0\") " Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.606925 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-utilities" (OuterVolumeSpecName: "utilities") pod "e13f3f09-878a-4f75-bef8-fb57045287d0" (UID: "e13f3f09-878a-4f75-bef8-fb57045287d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.615564 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13f3f09-878a-4f75-bef8-fb57045287d0-kube-api-access-nck8t" (OuterVolumeSpecName: "kube-api-access-nck8t") pod "e13f3f09-878a-4f75-bef8-fb57045287d0" (UID: "e13f3f09-878a-4f75-bef8-fb57045287d0"). InnerVolumeSpecName "kube-api-access-nck8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.676529 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e13f3f09-878a-4f75-bef8-fb57045287d0" (UID: "e13f3f09-878a-4f75-bef8-fb57045287d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.709341 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.709798 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nck8t\" (UniqueName: \"kubernetes.io/projected/e13f3f09-878a-4f75-bef8-fb57045287d0-kube-api-access-nck8t\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:36 crc kubenswrapper[4886]: I1124 09:45:36.709881 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13f3f09-878a-4f75-bef8-fb57045287d0-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.035937 4886 generic.go:334] "Generic (PLEG): container finished" podID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerID="460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66" exitCode=0 Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.035986 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v8cm6" event={"ID":"e13f3f09-878a-4f75-bef8-fb57045287d0","Type":"ContainerDied","Data":"460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66"} Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.036013 4886 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-v8cm6" event={"ID":"e13f3f09-878a-4f75-bef8-fb57045287d0","Type":"ContainerDied","Data":"6069eda1792e8a0f71ec1f06797c9649f8e30fcb810c9ccf4cfeecb7513f536f"} Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.036030 4886 scope.go:117] "RemoveContainer" containerID="460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.036040 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v8cm6" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.067179 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v8cm6"] Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.077304 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v8cm6"] Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.078310 4886 scope.go:117] "RemoveContainer" containerID="42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.109944 4886 scope.go:117] "RemoveContainer" containerID="e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.162107 4886 scope.go:117] "RemoveContainer" containerID="460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66" Nov 24 09:45:37 crc kubenswrapper[4886]: E1124 09:45:37.162821 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66\": container with ID starting with 460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66 not found: ID does not exist" containerID="460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 
09:45:37.162870 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66"} err="failed to get container status \"460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66\": rpc error: code = NotFound desc = could not find container \"460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66\": container with ID starting with 460831fd9b3dadfe6606e5e5b553c36f14828397dfcd90379ce0e4648d3c6b66 not found: ID does not exist" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.162901 4886 scope.go:117] "RemoveContainer" containerID="42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f" Nov 24 09:45:37 crc kubenswrapper[4886]: E1124 09:45:37.163367 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f\": container with ID starting with 42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f not found: ID does not exist" containerID="42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.163403 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f"} err="failed to get container status \"42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f\": rpc error: code = NotFound desc = could not find container \"42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f\": container with ID starting with 42c9cc030e84f00c1d8b78a8e7b5ff05be159e311df8db415325efc23df0e49f not found: ID does not exist" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.163423 4886 scope.go:117] "RemoveContainer" containerID="e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c" Nov 24 09:45:37 crc 
kubenswrapper[4886]: E1124 09:45:37.163722 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c\": container with ID starting with e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c not found: ID does not exist" containerID="e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c" Nov 24 09:45:37 crc kubenswrapper[4886]: I1124 09:45:37.163758 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c"} err="failed to get container status \"e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c\": rpc error: code = NotFound desc = could not find container \"e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c\": container with ID starting with e53a613a72f9fc8f505f8625ba2ce79405b8d1a3f61bc7b21210fdfe058ee52c not found: ID does not exist" Nov 24 09:45:38 crc kubenswrapper[4886]: I1124 09:45:38.849920 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:45:38 crc kubenswrapper[4886]: E1124 09:45:38.851416 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:45:38 crc kubenswrapper[4886]: I1124 09:45:38.871315 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" path="/var/lib/kubelet/pods/e13f3f09-878a-4f75-bef8-fb57045287d0/volumes" Nov 24 09:45:40 crc 
kubenswrapper[4886]: I1124 09:45:40.445549 4886 scope.go:117] "RemoveContainer" containerID="fc00c94d88dea5b57b5c85a20ee620fad7ff66b4b9bd361fa15ab65aa74ec015" Nov 24 09:45:53 crc kubenswrapper[4886]: I1124 09:45:53.849506 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:45:53 crc kubenswrapper[4886]: E1124 09:45:53.852191 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:46:04 crc kubenswrapper[4886]: I1124 09:46:04.862032 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:46:05 crc kubenswrapper[4886]: I1124 09:46:05.311877 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"4c81d65f580ad035ae3cb40985f146ce45574345f2884eeaa80d7f0764f1e262"} Nov 24 09:47:44 crc kubenswrapper[4886]: I1124 09:47:44.276396 4886 generic.go:334] "Generic (PLEG): container finished" podID="347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" containerID="582f3d4c5d02870dc2e229f11ddf735e7c4316ddbb3d39f51d692866555e70ff" exitCode=0 Nov 24 09:47:44 crc kubenswrapper[4886]: I1124 09:47:44.276482 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7","Type":"ContainerDied","Data":"582f3d4c5d02870dc2e229f11ddf735e7c4316ddbb3d39f51d692866555e70ff"} Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.625759 4886 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.670854 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.670930 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-temporary\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.670958 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkr8x\" (UniqueName: \"kubernetes.io/projected/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-kube-api-access-pkr8x\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.670988 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.671018 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ssh-key\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.671085 4886 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-config-data\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.671186 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-workdir\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.671299 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config-secret\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.671335 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ca-certs\") pod \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\" (UID: \"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7\") " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.671511 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.671912 4886 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.672202 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-config-data" (OuterVolumeSpecName: "config-data") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.676449 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.677078 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.677796 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-kube-api-access-pkr8x" (OuterVolumeSpecName: "kube-api-access-pkr8x") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "kube-api-access-pkr8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.700164 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.701649 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.706250 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.721699 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" (UID: "347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.774068 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.774112 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkr8x\" (UniqueName: \"kubernetes.io/projected/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-kube-api-access-pkr8x\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.774178 4886 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.774192 4886 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.774202 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.774213 4886 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.774223 4886 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.774233 4886 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.796101 4886 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 09:47:45 crc kubenswrapper[4886]: I1124 09:47:45.876618 4886 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:46 crc kubenswrapper[4886]: I1124 09:47:46.309182 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7","Type":"ContainerDied","Data":"baae8c0aaa40f60839c35466cabeee4144dcc81e641267e599faae0702a72938"} Nov 24 09:47:46 crc kubenswrapper[4886]: I1124 09:47:46.309569 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baae8c0aaa40f60839c35466cabeee4144dcc81e641267e599faae0702a72938" Nov 24 09:47:46 crc kubenswrapper[4886]: I1124 09:47:46.309353 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.634778 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 09:47:58 crc kubenswrapper[4886]: E1124 09:47:58.635736 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" containerName="tempest-tests-tempest-tests-runner" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.635753 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" containerName="tempest-tests-tempest-tests-runner" Nov 24 09:47:58 crc kubenswrapper[4886]: E1124 09:47:58.635779 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerName="extract-content" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.635787 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerName="extract-content" Nov 24 09:47:58 crc kubenswrapper[4886]: E1124 09:47:58.635806 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerName="extract-utilities" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.635813 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerName="extract-utilities" Nov 24 09:47:58 crc kubenswrapper[4886]: E1124 09:47:58.635827 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerName="registry-server" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.635834 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerName="registry-server" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.636058 4886 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7" containerName="tempest-tests-tempest-tests-runner" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.636094 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13f3f09-878a-4f75-bef8-fb57045287d0" containerName="registry-server" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.636898 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.639212 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vl5zb" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.648525 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.751687 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2e959b23-6fa2-4d10-b235-5fdc8c476ff9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.751920 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmh94\" (UniqueName: \"kubernetes.io/projected/2e959b23-6fa2-4d10-b235-5fdc8c476ff9-kube-api-access-jmh94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2e959b23-6fa2-4d10-b235-5fdc8c476ff9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.854236 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmh94\" (UniqueName: 
\"kubernetes.io/projected/2e959b23-6fa2-4d10-b235-5fdc8c476ff9-kube-api-access-jmh94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2e959b23-6fa2-4d10-b235-5fdc8c476ff9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.854295 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2e959b23-6fa2-4d10-b235-5fdc8c476ff9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.855083 4886 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2e959b23-6fa2-4d10-b235-5fdc8c476ff9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.880796 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmh94\" (UniqueName: \"kubernetes.io/projected/2e959b23-6fa2-4d10-b235-5fdc8c476ff9-kube-api-access-jmh94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2e959b23-6fa2-4d10-b235-5fdc8c476ff9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:58 crc kubenswrapper[4886]: I1124 09:47:58.889291 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2e959b23-6fa2-4d10-b235-5fdc8c476ff9\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:58 
crc kubenswrapper[4886]: I1124 09:47:58.965726 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 09:47:59 crc kubenswrapper[4886]: I1124 09:47:59.495732 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 09:48:00 crc kubenswrapper[4886]: I1124 09:48:00.478035 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2e959b23-6fa2-4d10-b235-5fdc8c476ff9","Type":"ContainerStarted","Data":"0b6fafde4b18f19996bb99f9c67b8c5d5bf14e86fcca2d07ad1e4754cb8fe22d"} Nov 24 09:48:02 crc kubenswrapper[4886]: I1124 09:48:02.526085 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2e959b23-6fa2-4d10-b235-5fdc8c476ff9","Type":"ContainerStarted","Data":"45eae9d64f8bddb1ce9d5486e69420c32aea366fd0e79213e3a68fc18a85c9e0"} Nov 24 09:48:02 crc kubenswrapper[4886]: I1124 09:48:02.549816 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.369394068 podStartE2EDuration="4.549791857s" podCreationTimestamp="2025-11-24 09:47:58 +0000 UTC" firstStartedPulling="2025-11-24 09:47:59.501338638 +0000 UTC m=+3535.388076773" lastFinishedPulling="2025-11-24 09:48:01.681736427 +0000 UTC m=+3537.568474562" observedRunningTime="2025-11-24 09:48:02.544187746 +0000 UTC m=+3538.430925901" watchObservedRunningTime="2025-11-24 09:48:02.549791857 +0000 UTC m=+3538.436529992" Nov 24 09:48:17 crc kubenswrapper[4886]: I1124 09:48:17.945083 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6k45g"] Nov 24 09:48:17 crc kubenswrapper[4886]: I1124 09:48:17.948499 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:17 crc kubenswrapper[4886]: I1124 09:48:17.962439 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k45g"] Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.067372 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-utilities\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.067467 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-catalog-content\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.068090 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbct\" (UniqueName: \"kubernetes.io/projected/984528ee-0a53-4678-8515-dab6a6ae710e-kube-api-access-qzbct\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.170698 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzbct\" (UniqueName: \"kubernetes.io/projected/984528ee-0a53-4678-8515-dab6a6ae710e-kube-api-access-qzbct\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.170872 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-utilities\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.170905 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-catalog-content\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.171509 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-catalog-content\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.171441 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-utilities\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.196611 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzbct\" (UniqueName: \"kubernetes.io/projected/984528ee-0a53-4678-8515-dab6a6ae710e-kube-api-access-qzbct\") pod \"redhat-marketplace-6k45g\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.330313 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:18 crc kubenswrapper[4886]: W1124 09:48:18.834228 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod984528ee_0a53_4678_8515_dab6a6ae710e.slice/crio-5d150e7fbac793e2037301493a15ee8b3ff6f634bb3d4ef3f2ec00ce15957210 WatchSource:0}: Error finding container 5d150e7fbac793e2037301493a15ee8b3ff6f634bb3d4ef3f2ec00ce15957210: Status 404 returned error can't find the container with id 5d150e7fbac793e2037301493a15ee8b3ff6f634bb3d4ef3f2ec00ce15957210 Nov 24 09:48:18 crc kubenswrapper[4886]: I1124 09:48:18.837664 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k45g"] Nov 24 09:48:19 crc kubenswrapper[4886]: I1124 09:48:19.681349 4886 generic.go:334] "Generic (PLEG): container finished" podID="984528ee-0a53-4678-8515-dab6a6ae710e" containerID="64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1" exitCode=0 Nov 24 09:48:19 crc kubenswrapper[4886]: I1124 09:48:19.681427 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k45g" event={"ID":"984528ee-0a53-4678-8515-dab6a6ae710e","Type":"ContainerDied","Data":"64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1"} Nov 24 09:48:19 crc kubenswrapper[4886]: I1124 09:48:19.682476 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k45g" event={"ID":"984528ee-0a53-4678-8515-dab6a6ae710e","Type":"ContainerStarted","Data":"5d150e7fbac793e2037301493a15ee8b3ff6f634bb3d4ef3f2ec00ce15957210"} Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.722086 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tjqb8/must-gather-ldxjf"] Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.726054 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.738785 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tjqb8"/"kube-root-ca.crt" Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.739054 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tjqb8"/"default-dockercfg-glqkj" Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.739264 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tjqb8"/"openshift-service-ca.crt" Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.744296 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tjqb8/must-gather-ldxjf"] Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.751076 4886 generic.go:334] "Generic (PLEG): container finished" podID="984528ee-0a53-4678-8515-dab6a6ae710e" containerID="106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e" exitCode=0 Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.751163 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k45g" event={"ID":"984528ee-0a53-4678-8515-dab6a6ae710e","Type":"ContainerDied","Data":"106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e"} Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.911448 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9b187e2-2e4e-4f8a-9c51-657f618463cd-must-gather-output\") pod \"must-gather-ldxjf\" (UID: \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\") " pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:48:24 crc kubenswrapper[4886]: I1124 09:48:24.911545 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-574f2\" (UniqueName: 
\"kubernetes.io/projected/a9b187e2-2e4e-4f8a-9c51-657f618463cd-kube-api-access-574f2\") pod \"must-gather-ldxjf\" (UID: \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\") " pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:48:25 crc kubenswrapper[4886]: I1124 09:48:25.013214 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9b187e2-2e4e-4f8a-9c51-657f618463cd-must-gather-output\") pod \"must-gather-ldxjf\" (UID: \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\") " pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:48:25 crc kubenswrapper[4886]: I1124 09:48:25.013287 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-574f2\" (UniqueName: \"kubernetes.io/projected/a9b187e2-2e4e-4f8a-9c51-657f618463cd-kube-api-access-574f2\") pod \"must-gather-ldxjf\" (UID: \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\") " pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:48:25 crc kubenswrapper[4886]: I1124 09:48:25.014123 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9b187e2-2e4e-4f8a-9c51-657f618463cd-must-gather-output\") pod \"must-gather-ldxjf\" (UID: \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\") " pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:48:25 crc kubenswrapper[4886]: I1124 09:48:25.037035 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-574f2\" (UniqueName: \"kubernetes.io/projected/a9b187e2-2e4e-4f8a-9c51-657f618463cd-kube-api-access-574f2\") pod \"must-gather-ldxjf\" (UID: \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\") " pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:48:25 crc kubenswrapper[4886]: I1124 09:48:25.057754 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:48:25 crc kubenswrapper[4886]: I1124 09:48:25.392389 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tjqb8/must-gather-ldxjf"] Nov 24 09:48:25 crc kubenswrapper[4886]: I1124 09:48:25.770640 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" event={"ID":"a9b187e2-2e4e-4f8a-9c51-657f618463cd","Type":"ContainerStarted","Data":"eeb50bcba42277caa6353abd1fd47fa328e9ed56f5616fe3069be3dcd5606803"} Nov 24 09:48:27 crc kubenswrapper[4886]: I1124 09:48:27.808205 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k45g" event={"ID":"984528ee-0a53-4678-8515-dab6a6ae710e","Type":"ContainerStarted","Data":"6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0"} Nov 24 09:48:27 crc kubenswrapper[4886]: I1124 09:48:27.837738 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6k45g" podStartSLOduration=3.989717394 podStartE2EDuration="10.837723225s" podCreationTimestamp="2025-11-24 09:48:17 +0000 UTC" firstStartedPulling="2025-11-24 09:48:19.683583024 +0000 UTC m=+3555.570321159" lastFinishedPulling="2025-11-24 09:48:26.531588855 +0000 UTC m=+3562.418326990" observedRunningTime="2025-11-24 09:48:27.835853852 +0000 UTC m=+3563.722591997" watchObservedRunningTime="2025-11-24 09:48:27.837723225 +0000 UTC m=+3563.724461360" Nov 24 09:48:28 crc kubenswrapper[4886]: I1124 09:48:28.330774 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:28 crc kubenswrapper[4886]: I1124 09:48:28.331111 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:28 crc kubenswrapper[4886]: I1124 09:48:28.392884 4886 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.207739 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9g54"] Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.210297 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.218824 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9g54"] Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.323801 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-catalog-content\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.324045 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j45vm\" (UniqueName: \"kubernetes.io/projected/0c21564c-7aec-4c38-887c-2d2026bf99ad-kube-api-access-j45vm\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.324381 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-utilities\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.426619 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-catalog-content\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.426708 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j45vm\" (UniqueName: \"kubernetes.io/projected/0c21564c-7aec-4c38-887c-2d2026bf99ad-kube-api-access-j45vm\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.426763 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-utilities\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.427341 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-utilities\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.427710 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-catalog-content\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.455092 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j45vm\" (UniqueName: \"kubernetes.io/projected/0c21564c-7aec-4c38-887c-2d2026bf99ad-kube-api-access-j45vm\") pod \"community-operators-p9g54\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:29 crc kubenswrapper[4886]: I1124 09:48:29.546605 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:31 crc kubenswrapper[4886]: I1124 09:48:31.785190 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:48:31 crc kubenswrapper[4886]: I1124 09:48:31.785510 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:48:38 crc kubenswrapper[4886]: I1124 09:48:38.384088 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:38 crc kubenswrapper[4886]: I1124 09:48:38.448717 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k45g"] Nov 24 09:48:38 crc kubenswrapper[4886]: I1124 09:48:38.945868 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6k45g" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" containerName="registry-server" containerID="cri-o://6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0" gracePeriod=2 Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.100617 
4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9g54"] Nov 24 09:48:39 crc kubenswrapper[4886]: W1124 09:48:39.108560 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c21564c_7aec_4c38_887c_2d2026bf99ad.slice/crio-8a1458225d17a0ee43ea0bccbb39b780ed33a72b13dd9c65cf2e467ecd36b8c7 WatchSource:0}: Error finding container 8a1458225d17a0ee43ea0bccbb39b780ed33a72b13dd9c65cf2e467ecd36b8c7: Status 404 returned error can't find the container with id 8a1458225d17a0ee43ea0bccbb39b780ed33a72b13dd9c65cf2e467ecd36b8c7 Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.402833 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.454818 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzbct\" (UniqueName: \"kubernetes.io/projected/984528ee-0a53-4678-8515-dab6a6ae710e-kube-api-access-qzbct\") pod \"984528ee-0a53-4678-8515-dab6a6ae710e\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.454915 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-catalog-content\") pod \"984528ee-0a53-4678-8515-dab6a6ae710e\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.454955 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-utilities\") pod \"984528ee-0a53-4678-8515-dab6a6ae710e\" (UID: \"984528ee-0a53-4678-8515-dab6a6ae710e\") " Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.456905 4886 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-utilities" (OuterVolumeSpecName: "utilities") pod "984528ee-0a53-4678-8515-dab6a6ae710e" (UID: "984528ee-0a53-4678-8515-dab6a6ae710e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.470090 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/984528ee-0a53-4678-8515-dab6a6ae710e-kube-api-access-qzbct" (OuterVolumeSpecName: "kube-api-access-qzbct") pod "984528ee-0a53-4678-8515-dab6a6ae710e" (UID: "984528ee-0a53-4678-8515-dab6a6ae710e"). InnerVolumeSpecName "kube-api-access-qzbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.483761 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "984528ee-0a53-4678-8515-dab6a6ae710e" (UID: "984528ee-0a53-4678-8515-dab6a6ae710e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.557855 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzbct\" (UniqueName: \"kubernetes.io/projected/984528ee-0a53-4678-8515-dab6a6ae710e-kube-api-access-qzbct\") on node \"crc\" DevicePath \"\"" Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.557908 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.557922 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/984528ee-0a53-4678-8515-dab6a6ae710e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.963505 4886 generic.go:334] "Generic (PLEG): container finished" podID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerID="5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95" exitCode=0 Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.963629 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9g54" event={"ID":"0c21564c-7aec-4c38-887c-2d2026bf99ad","Type":"ContainerDied","Data":"5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95"} Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.964211 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9g54" event={"ID":"0c21564c-7aec-4c38-887c-2d2026bf99ad","Type":"ContainerStarted","Data":"8a1458225d17a0ee43ea0bccbb39b780ed33a72b13dd9c65cf2e467ecd36b8c7"} Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.968768 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" 
event={"ID":"a9b187e2-2e4e-4f8a-9c51-657f618463cd","Type":"ContainerStarted","Data":"9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db"} Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.968826 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" event={"ID":"a9b187e2-2e4e-4f8a-9c51-657f618463cd","Type":"ContainerStarted","Data":"da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9"} Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.973715 4886 generic.go:334] "Generic (PLEG): container finished" podID="984528ee-0a53-4678-8515-dab6a6ae710e" containerID="6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0" exitCode=0 Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.973771 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k45g" event={"ID":"984528ee-0a53-4678-8515-dab6a6ae710e","Type":"ContainerDied","Data":"6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0"} Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.973801 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6k45g" event={"ID":"984528ee-0a53-4678-8515-dab6a6ae710e","Type":"ContainerDied","Data":"5d150e7fbac793e2037301493a15ee8b3ff6f634bb3d4ef3f2ec00ce15957210"} Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.973823 4886 scope.go:117] "RemoveContainer" containerID="6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0" Nov 24 09:48:39 crc kubenswrapper[4886]: I1124 09:48:39.973946 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6k45g" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.012055 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" podStartSLOduration=2.730184416 podStartE2EDuration="16.012030651s" podCreationTimestamp="2025-11-24 09:48:24 +0000 UTC" firstStartedPulling="2025-11-24 09:48:25.398562456 +0000 UTC m=+3561.285300591" lastFinishedPulling="2025-11-24 09:48:38.680408691 +0000 UTC m=+3574.567146826" observedRunningTime="2025-11-24 09:48:39.998048521 +0000 UTC m=+3575.884786656" watchObservedRunningTime="2025-11-24 09:48:40.012030651 +0000 UTC m=+3575.898768786" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.019280 4886 scope.go:117] "RemoveContainer" containerID="106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.032833 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k45g"] Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.041304 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6k45g"] Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.055847 4886 scope.go:117] "RemoveContainer" containerID="64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.091818 4886 scope.go:117] "RemoveContainer" containerID="6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0" Nov 24 09:48:40 crc kubenswrapper[4886]: E1124 09:48:40.092444 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0\": container with ID starting with 6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0 not found: ID does not exist" 
containerID="6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.092503 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0"} err="failed to get container status \"6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0\": rpc error: code = NotFound desc = could not find container \"6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0\": container with ID starting with 6c4edacd09972e8a3feea874dceac76a665aab637ea8c8eb25c81b693f7836c0 not found: ID does not exist" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.092529 4886 scope.go:117] "RemoveContainer" containerID="106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e" Nov 24 09:48:40 crc kubenswrapper[4886]: E1124 09:48:40.093210 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e\": container with ID starting with 106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e not found: ID does not exist" containerID="106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.093320 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e"} err="failed to get container status \"106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e\": rpc error: code = NotFound desc = could not find container \"106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e\": container with ID starting with 106c9b0894a97781251cea542c057a7d5e6bbb617f748b0565076d30edd0138e not found: ID does not exist" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.093389 4886 scope.go:117] 
"RemoveContainer" containerID="64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1" Nov 24 09:48:40 crc kubenswrapper[4886]: E1124 09:48:40.093938 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1\": container with ID starting with 64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1 not found: ID does not exist" containerID="64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.094022 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1"} err="failed to get container status \"64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1\": rpc error: code = NotFound desc = could not find container \"64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1\": container with ID starting with 64ceb2b91bc1a52bd71ae773023eee5958f57f500dd9a26f0e1b311fc7a0a7c1 not found: ID does not exist" Nov 24 09:48:40 crc kubenswrapper[4886]: I1124 09:48:40.860846 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" path="/var/lib/kubelet/pods/984528ee-0a53-4678-8515-dab6a6ae710e/volumes" Nov 24 09:48:42 crc kubenswrapper[4886]: I1124 09:48:42.904084 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-9wlvx"] Nov 24 09:48:42 crc kubenswrapper[4886]: E1124 09:48:42.905183 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" containerName="extract-content" Nov 24 09:48:42 crc kubenswrapper[4886]: I1124 09:48:42.905199 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" containerName="extract-content" Nov 24 09:48:42 crc 
kubenswrapper[4886]: E1124 09:48:42.905215 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" containerName="extract-utilities" Nov 24 09:48:42 crc kubenswrapper[4886]: I1124 09:48:42.905221 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" containerName="extract-utilities" Nov 24 09:48:42 crc kubenswrapper[4886]: E1124 09:48:42.905243 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" containerName="registry-server" Nov 24 09:48:42 crc kubenswrapper[4886]: I1124 09:48:42.905250 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" containerName="registry-server" Nov 24 09:48:42 crc kubenswrapper[4886]: I1124 09:48:42.905440 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="984528ee-0a53-4678-8515-dab6a6ae710e" containerName="registry-server" Nov 24 09:48:42 crc kubenswrapper[4886]: I1124 09:48:42.906029 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:48:42 crc kubenswrapper[4886]: I1124 09:48:42.944176 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfc8100e-afdc-4beb-91bc-d2a3871be783-host\") pod \"crc-debug-9wlvx\" (UID: \"dfc8100e-afdc-4beb-91bc-d2a3871be783\") " pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:48:42 crc kubenswrapper[4886]: I1124 09:48:42.944323 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frslg\" (UniqueName: \"kubernetes.io/projected/dfc8100e-afdc-4beb-91bc-d2a3871be783-kube-api-access-frslg\") pod \"crc-debug-9wlvx\" (UID: \"dfc8100e-afdc-4beb-91bc-d2a3871be783\") " pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:48:43 crc kubenswrapper[4886]: I1124 09:48:43.046870 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frslg\" (UniqueName: \"kubernetes.io/projected/dfc8100e-afdc-4beb-91bc-d2a3871be783-kube-api-access-frslg\") pod \"crc-debug-9wlvx\" (UID: \"dfc8100e-afdc-4beb-91bc-d2a3871be783\") " pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:48:43 crc kubenswrapper[4886]: I1124 09:48:43.047194 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfc8100e-afdc-4beb-91bc-d2a3871be783-host\") pod \"crc-debug-9wlvx\" (UID: \"dfc8100e-afdc-4beb-91bc-d2a3871be783\") " pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:48:43 crc kubenswrapper[4886]: I1124 09:48:43.047410 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfc8100e-afdc-4beb-91bc-d2a3871be783-host\") pod \"crc-debug-9wlvx\" (UID: \"dfc8100e-afdc-4beb-91bc-d2a3871be783\") " pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:48:43 crc 
kubenswrapper[4886]: I1124 09:48:43.074573 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frslg\" (UniqueName: \"kubernetes.io/projected/dfc8100e-afdc-4beb-91bc-d2a3871be783-kube-api-access-frslg\") pod \"crc-debug-9wlvx\" (UID: \"dfc8100e-afdc-4beb-91bc-d2a3871be783\") " pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:48:43 crc kubenswrapper[4886]: I1124 09:48:43.226779 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:48:44 crc kubenswrapper[4886]: I1124 09:48:44.015639 4886 generic.go:334] "Generic (PLEG): container finished" podID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerID="8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2" exitCode=0 Nov 24 09:48:44 crc kubenswrapper[4886]: I1124 09:48:44.015769 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9g54" event={"ID":"0c21564c-7aec-4c38-887c-2d2026bf99ad","Type":"ContainerDied","Data":"8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2"} Nov 24 09:48:44 crc kubenswrapper[4886]: I1124 09:48:44.018985 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" event={"ID":"dfc8100e-afdc-4beb-91bc-d2a3871be783","Type":"ContainerStarted","Data":"2c2b11ac7cc686559c1345dedee2dcc85589ff00d8a82f99439c9ca6512475ce"} Nov 24 09:48:48 crc kubenswrapper[4886]: I1124 09:48:48.065297 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9g54" event={"ID":"0c21564c-7aec-4c38-887c-2d2026bf99ad","Type":"ContainerStarted","Data":"7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952"} Nov 24 09:48:48 crc kubenswrapper[4886]: I1124 09:48:48.089373 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9g54" podStartSLOduration=12.327215521 
podStartE2EDuration="19.089341986s" podCreationTimestamp="2025-11-24 09:48:29 +0000 UTC" firstStartedPulling="2025-11-24 09:48:39.966934931 +0000 UTC m=+3575.853673066" lastFinishedPulling="2025-11-24 09:48:46.729061396 +0000 UTC m=+3582.615799531" observedRunningTime="2025-11-24 09:48:48.08248714 +0000 UTC m=+3583.969225285" watchObservedRunningTime="2025-11-24 09:48:48.089341986 +0000 UTC m=+3583.976080121" Nov 24 09:48:49 crc kubenswrapper[4886]: I1124 09:48:49.547964 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:49 crc kubenswrapper[4886]: I1124 09:48:49.548476 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:48:50 crc kubenswrapper[4886]: I1124 09:48:50.606228 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p9g54" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" probeResult="failure" output=< Nov 24 09:48:50 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:48:50 crc kubenswrapper[4886]: > Nov 24 09:49:05 crc kubenswrapper[4886]: I1124 09:49:00.594975 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p9g54" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" probeResult="failure" output=< Nov 24 09:49:05 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:49:05 crc kubenswrapper[4886]: > Nov 24 09:49:05 crc kubenswrapper[4886]: I1124 09:49:01.784660 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:49:05 
crc kubenswrapper[4886]: I1124 09:49:01.785040 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:49:07 crc kubenswrapper[4886]: E1124 09:49:07.656096 4886 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Nov 24 09:49:07 crc kubenswrapper[4886]: E1124 09:49:07.656826 4886 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frslg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-9wlvx_openshift-must-gather-tjqb8(dfc8100e-afdc-4beb-91bc-d2a3871be783): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:49:07 crc kubenswrapper[4886]: E1124 09:49:07.658059 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" podUID="dfc8100e-afdc-4beb-91bc-d2a3871be783" Nov 24 09:49:08 crc kubenswrapper[4886]: E1124 09:49:08.291427 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" podUID="dfc8100e-afdc-4beb-91bc-d2a3871be783" Nov 24 09:49:10 crc kubenswrapper[4886]: I1124 09:49:10.598542 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p9g54" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" probeResult="failure" output=< Nov 24 09:49:10 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:49:10 crc kubenswrapper[4886]: > Nov 24 09:49:20 crc kubenswrapper[4886]: I1124 09:49:20.591842 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p9g54" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" probeResult="failure" output=< Nov 24 09:49:20 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:49:20 crc kubenswrapper[4886]: > Nov 24 09:49:22 crc kubenswrapper[4886]: I1124 09:49:22.443988 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" event={"ID":"dfc8100e-afdc-4beb-91bc-d2a3871be783","Type":"ContainerStarted","Data":"7d6301c636c30267ffa61a1d39af112020d4a5b34ca9dd85d025083dd4e38ee5"} Nov 24 09:49:30 crc kubenswrapper[4886]: I1124 09:49:30.599415 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p9g54" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" probeResult="failure" output=< Nov 24 09:49:30 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:49:30 crc kubenswrapper[4886]: > Nov 24 09:49:31 crc kubenswrapper[4886]: I1124 09:49:31.784246 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:49:31 crc kubenswrapper[4886]: I1124 09:49:31.784828 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:49:31 crc kubenswrapper[4886]: I1124 09:49:31.784902 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:49:31 crc kubenswrapper[4886]: I1124 09:49:31.786070 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c81d65f580ad035ae3cb40985f146ce45574345f2884eeaa80d7f0764f1e262"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:49:31 crc kubenswrapper[4886]: I1124 09:49:31.786148 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://4c81d65f580ad035ae3cb40985f146ce45574345f2884eeaa80d7f0764f1e262" gracePeriod=600 Nov 24 09:49:32 crc kubenswrapper[4886]: I1124 09:49:32.555000 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="4c81d65f580ad035ae3cb40985f146ce45574345f2884eeaa80d7f0764f1e262" exitCode=0 Nov 24 09:49:32 crc kubenswrapper[4886]: I1124 09:49:32.555067 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"4c81d65f580ad035ae3cb40985f146ce45574345f2884eeaa80d7f0764f1e262"} Nov 24 09:49:32 crc kubenswrapper[4886]: I1124 09:49:32.555141 4886 scope.go:117] "RemoveContainer" containerID="aa909447043ab64320232fc597943f6e0932319ce7386a6b3d06298eb5b2bbf2" Nov 24 09:49:36 crc kubenswrapper[4886]: I1124 09:49:36.596201 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7"} Nov 24 09:49:36 crc kubenswrapper[4886]: I1124 09:49:36.621307 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" podStartSLOduration=15.715785742 podStartE2EDuration="54.62128555s" podCreationTimestamp="2025-11-24 09:48:42 +0000 UTC" firstStartedPulling="2025-11-24 09:48:43.285318751 +0000 UTC m=+3579.172056886" lastFinishedPulling="2025-11-24 09:49:22.190818559 +0000 UTC m=+3618.077556694" observedRunningTime="2025-11-24 09:49:22.465953949 +0000 UTC m=+3618.352692124" watchObservedRunningTime="2025-11-24 09:49:36.62128555 +0000 UTC m=+3632.508023685" Nov 24 09:49:40 crc kubenswrapper[4886]: I1124 09:49:40.609399 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p9g54" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" probeResult="failure" output=< Nov 24 09:49:40 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:49:40 crc kubenswrapper[4886]: > Nov 24 09:49:49 crc kubenswrapper[4886]: I1124 09:49:49.601906 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:49:49 crc 
kubenswrapper[4886]: I1124 09:49:49.672365 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:49:49 crc kubenswrapper[4886]: I1124 09:49:49.851454 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9g54"] Nov 24 09:49:50 crc kubenswrapper[4886]: I1124 09:49:50.740878 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9g54" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" containerID="cri-o://7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952" gracePeriod=2 Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.252972 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.379499 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j45vm\" (UniqueName: \"kubernetes.io/projected/0c21564c-7aec-4c38-887c-2d2026bf99ad-kube-api-access-j45vm\") pod \"0c21564c-7aec-4c38-887c-2d2026bf99ad\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.379667 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-utilities\") pod \"0c21564c-7aec-4c38-887c-2d2026bf99ad\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.379818 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-catalog-content\") pod \"0c21564c-7aec-4c38-887c-2d2026bf99ad\" (UID: \"0c21564c-7aec-4c38-887c-2d2026bf99ad\") " Nov 24 09:49:51 
crc kubenswrapper[4886]: I1124 09:49:51.381667 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-utilities" (OuterVolumeSpecName: "utilities") pod "0c21564c-7aec-4c38-887c-2d2026bf99ad" (UID: "0c21564c-7aec-4c38-887c-2d2026bf99ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.387769 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c21564c-7aec-4c38-887c-2d2026bf99ad-kube-api-access-j45vm" (OuterVolumeSpecName: "kube-api-access-j45vm") pod "0c21564c-7aec-4c38-887c-2d2026bf99ad" (UID: "0c21564c-7aec-4c38-887c-2d2026bf99ad"). InnerVolumeSpecName "kube-api-access-j45vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.445223 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c21564c-7aec-4c38-887c-2d2026bf99ad" (UID: "0c21564c-7aec-4c38-887c-2d2026bf99ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.483231 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j45vm\" (UniqueName: \"kubernetes.io/projected/0c21564c-7aec-4c38-887c-2d2026bf99ad-kube-api-access-j45vm\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.483299 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.483324 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c21564c-7aec-4c38-887c-2d2026bf99ad-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.752973 4886 generic.go:334] "Generic (PLEG): container finished" podID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerID="7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952" exitCode=0 Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.753033 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9g54" event={"ID":"0c21564c-7aec-4c38-887c-2d2026bf99ad","Type":"ContainerDied","Data":"7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952"} Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.753074 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9g54" event={"ID":"0c21564c-7aec-4c38-887c-2d2026bf99ad","Type":"ContainerDied","Data":"8a1458225d17a0ee43ea0bccbb39b780ed33a72b13dd9c65cf2e467ecd36b8c7"} Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.753098 4886 scope.go:117] "RemoveContainer" containerID="7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 
09:49:51.753328 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9g54" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.803905 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9g54"] Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.806220 4886 scope.go:117] "RemoveContainer" containerID="8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.814617 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9g54"] Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.839934 4886 scope.go:117] "RemoveContainer" containerID="5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.885791 4886 scope.go:117] "RemoveContainer" containerID="7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952" Nov 24 09:49:51 crc kubenswrapper[4886]: E1124 09:49:51.886455 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952\": container with ID starting with 7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952 not found: ID does not exist" containerID="7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.886524 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952"} err="failed to get container status \"7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952\": rpc error: code = NotFound desc = could not find container \"7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952\": container with ID starting with 
7bb4fe098f9d7c0230153b9c4050585982a38aa247cd1238a6fc51abdb5da952 not found: ID does not exist" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.886555 4886 scope.go:117] "RemoveContainer" containerID="8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2" Nov 24 09:49:51 crc kubenswrapper[4886]: E1124 09:49:51.886966 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2\": container with ID starting with 8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2 not found: ID does not exist" containerID="8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.886999 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2"} err="failed to get container status \"8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2\": rpc error: code = NotFound desc = could not find container \"8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2\": container with ID starting with 8107fff38e2544e06f13e8ff1aeaef286be53b5fbdb20d07febffb7f5df6a2e2 not found: ID does not exist" Nov 24 09:49:51 crc kubenswrapper[4886]: I1124 09:49:51.887019 4886 scope.go:117] "RemoveContainer" containerID="5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95" Nov 24 09:49:51 crc kubenswrapper[4886]: E1124 09:49:51.887636 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95\": container with ID starting with 5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95 not found: ID does not exist" containerID="5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95" Nov 24 09:49:51 crc 
kubenswrapper[4886]: I1124 09:49:51.887691 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95"} err="failed to get container status \"5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95\": rpc error: code = NotFound desc = could not find container \"5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95\": container with ID starting with 5e1ccb66dbc3a4dc94e0f786c62b9cb32ddd9af1034988a71edc5c0ae7616c95 not found: ID does not exist" Nov 24 09:49:52 crc kubenswrapper[4886]: I1124 09:49:52.870031 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" path="/var/lib/kubelet/pods/0c21564c-7aec-4c38-887c-2d2026bf99ad/volumes" Nov 24 09:50:10 crc kubenswrapper[4886]: I1124 09:50:10.981406 4886 generic.go:334] "Generic (PLEG): container finished" podID="dfc8100e-afdc-4beb-91bc-d2a3871be783" containerID="7d6301c636c30267ffa61a1d39af112020d4a5b34ca9dd85d025083dd4e38ee5" exitCode=0 Nov 24 09:50:10 crc kubenswrapper[4886]: I1124 09:50:10.981503 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" event={"ID":"dfc8100e-afdc-4beb-91bc-d2a3871be783","Type":"ContainerDied","Data":"7d6301c636c30267ffa61a1d39af112020d4a5b34ca9dd85d025083dd4e38ee5"} Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.093397 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.131560 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-9wlvx"] Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.141181 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-9wlvx"] Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.225887 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frslg\" (UniqueName: \"kubernetes.io/projected/dfc8100e-afdc-4beb-91bc-d2a3871be783-kube-api-access-frslg\") pod \"dfc8100e-afdc-4beb-91bc-d2a3871be783\" (UID: \"dfc8100e-afdc-4beb-91bc-d2a3871be783\") " Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.226217 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfc8100e-afdc-4beb-91bc-d2a3871be783-host\") pod \"dfc8100e-afdc-4beb-91bc-d2a3871be783\" (UID: \"dfc8100e-afdc-4beb-91bc-d2a3871be783\") " Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.226447 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfc8100e-afdc-4beb-91bc-d2a3871be783-host" (OuterVolumeSpecName: "host") pod "dfc8100e-afdc-4beb-91bc-d2a3871be783" (UID: "dfc8100e-afdc-4beb-91bc-d2a3871be783"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.227446 4886 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfc8100e-afdc-4beb-91bc-d2a3871be783-host\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.231788 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc8100e-afdc-4beb-91bc-d2a3871be783-kube-api-access-frslg" (OuterVolumeSpecName: "kube-api-access-frslg") pod "dfc8100e-afdc-4beb-91bc-d2a3871be783" (UID: "dfc8100e-afdc-4beb-91bc-d2a3871be783"). InnerVolumeSpecName "kube-api-access-frslg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.330136 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frslg\" (UniqueName: \"kubernetes.io/projected/dfc8100e-afdc-4beb-91bc-d2a3871be783-kube-api-access-frslg\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:12 crc kubenswrapper[4886]: I1124 09:50:12.869466 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc8100e-afdc-4beb-91bc-d2a3871be783" path="/var/lib/kubelet/pods/dfc8100e-afdc-4beb-91bc-d2a3871be783/volumes" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.007129 4886 scope.go:117] "RemoveContainer" containerID="7d6301c636c30267ffa61a1d39af112020d4a5b34ca9dd85d025083dd4e38ee5" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.007265 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-9wlvx" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.320254 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-8hkjx"] Nov 24 09:50:13 crc kubenswrapper[4886]: E1124 09:50:13.320847 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="extract-utilities" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.320870 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="extract-utilities" Nov 24 09:50:13 crc kubenswrapper[4886]: E1124 09:50:13.320894 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.320907 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="registry-server" Nov 24 09:50:13 crc kubenswrapper[4886]: E1124 09:50:13.320952 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc8100e-afdc-4beb-91bc-d2a3871be783" containerName="container-00" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.320967 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc8100e-afdc-4beb-91bc-d2a3871be783" containerName="container-00" Nov 24 09:50:13 crc kubenswrapper[4886]: E1124 09:50:13.321008 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="extract-content" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.321021 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" containerName="extract-content" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.321371 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c21564c-7aec-4c38-887c-2d2026bf99ad" 
containerName="registry-server" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.321412 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc8100e-afdc-4beb-91bc-d2a3871be783" containerName="container-00" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.322411 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.455945 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8594ba94-ae16-4f6d-a33b-0912fff121b7-host\") pod \"crc-debug-8hkjx\" (UID: \"8594ba94-ae16-4f6d-a33b-0912fff121b7\") " pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.456605 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgxp\" (UniqueName: \"kubernetes.io/projected/8594ba94-ae16-4f6d-a33b-0912fff121b7-kube-api-access-swgxp\") pod \"crc-debug-8hkjx\" (UID: \"8594ba94-ae16-4f6d-a33b-0912fff121b7\") " pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.559507 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swgxp\" (UniqueName: \"kubernetes.io/projected/8594ba94-ae16-4f6d-a33b-0912fff121b7-kube-api-access-swgxp\") pod \"crc-debug-8hkjx\" (UID: \"8594ba94-ae16-4f6d-a33b-0912fff121b7\") " pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.560133 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8594ba94-ae16-4f6d-a33b-0912fff121b7-host\") pod \"crc-debug-8hkjx\" (UID: \"8594ba94-ae16-4f6d-a33b-0912fff121b7\") " pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:13 crc 
kubenswrapper[4886]: I1124 09:50:13.560374 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8594ba94-ae16-4f6d-a33b-0912fff121b7-host\") pod \"crc-debug-8hkjx\" (UID: \"8594ba94-ae16-4f6d-a33b-0912fff121b7\") " pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.583187 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swgxp\" (UniqueName: \"kubernetes.io/projected/8594ba94-ae16-4f6d-a33b-0912fff121b7-kube-api-access-swgxp\") pod \"crc-debug-8hkjx\" (UID: \"8594ba94-ae16-4f6d-a33b-0912fff121b7\") " pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:13 crc kubenswrapper[4886]: I1124 09:50:13.642223 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:14 crc kubenswrapper[4886]: I1124 09:50:14.021818 4886 generic.go:334] "Generic (PLEG): container finished" podID="8594ba94-ae16-4f6d-a33b-0912fff121b7" containerID="0fd839337cbfac1475a0aeb57d61a844cb0bb4c64e386c05a10202cc0926edaf" exitCode=0 Nov 24 09:50:14 crc kubenswrapper[4886]: I1124 09:50:14.021916 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" event={"ID":"8594ba94-ae16-4f6d-a33b-0912fff121b7","Type":"ContainerDied","Data":"0fd839337cbfac1475a0aeb57d61a844cb0bb4c64e386c05a10202cc0926edaf"} Nov 24 09:50:14 crc kubenswrapper[4886]: I1124 09:50:14.022413 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" event={"ID":"8594ba94-ae16-4f6d-a33b-0912fff121b7","Type":"ContainerStarted","Data":"f827e468ed151ce71fdcfd36b703c3e83144db60158813f6724481d50a5c5fac"} Nov 24 09:50:14 crc kubenswrapper[4886]: I1124 09:50:14.593102 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-8hkjx"] Nov 24 09:50:14 crc 
kubenswrapper[4886]: I1124 09:50:14.604101 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-8hkjx"] Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.143236 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-8hkjx" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.296112 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8594ba94-ae16-4f6d-a33b-0912fff121b7-host\") pod \"8594ba94-ae16-4f6d-a33b-0912fff121b7\" (UID: \"8594ba94-ae16-4f6d-a33b-0912fff121b7\") " Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.296258 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swgxp\" (UniqueName: \"kubernetes.io/projected/8594ba94-ae16-4f6d-a33b-0912fff121b7-kube-api-access-swgxp\") pod \"8594ba94-ae16-4f6d-a33b-0912fff121b7\" (UID: \"8594ba94-ae16-4f6d-a33b-0912fff121b7\") " Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.296258 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8594ba94-ae16-4f6d-a33b-0912fff121b7-host" (OuterVolumeSpecName: "host") pod "8594ba94-ae16-4f6d-a33b-0912fff121b7" (UID: "8594ba94-ae16-4f6d-a33b-0912fff121b7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.296705 4886 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8594ba94-ae16-4f6d-a33b-0912fff121b7-host\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.303347 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8594ba94-ae16-4f6d-a33b-0912fff121b7-kube-api-access-swgxp" (OuterVolumeSpecName: "kube-api-access-swgxp") pod "8594ba94-ae16-4f6d-a33b-0912fff121b7" (UID: "8594ba94-ae16-4f6d-a33b-0912fff121b7"). InnerVolumeSpecName "kube-api-access-swgxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.398827 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swgxp\" (UniqueName: \"kubernetes.io/projected/8594ba94-ae16-4f6d-a33b-0912fff121b7-kube-api-access-swgxp\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.794600 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-7rgp6"] Nov 24 09:50:15 crc kubenswrapper[4886]: E1124 09:50:15.795052 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8594ba94-ae16-4f6d-a33b-0912fff121b7" containerName="container-00" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.795066 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="8594ba94-ae16-4f6d-a33b-0912fff121b7" containerName="container-00" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.795347 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="8594ba94-ae16-4f6d-a33b-0912fff121b7" containerName="container-00" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.796102 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-7rgp6" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.907312 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vfg\" (UniqueName: \"kubernetes.io/projected/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-kube-api-access-l8vfg\") pod \"crc-debug-7rgp6\" (UID: \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\") " pod="openshift-must-gather-tjqb8/crc-debug-7rgp6" Nov 24 09:50:15 crc kubenswrapper[4886]: I1124 09:50:15.907638 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-host\") pod \"crc-debug-7rgp6\" (UID: \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\") " pod="openshift-must-gather-tjqb8/crc-debug-7rgp6" Nov 24 09:50:16 crc kubenswrapper[4886]: I1124 09:50:16.009452 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-host\") pod \"crc-debug-7rgp6\" (UID: \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\") " pod="openshift-must-gather-tjqb8/crc-debug-7rgp6" Nov 24 09:50:16 crc kubenswrapper[4886]: I1124 09:50:16.009571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-host\") pod \"crc-debug-7rgp6\" (UID: \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\") " pod="openshift-must-gather-tjqb8/crc-debug-7rgp6" Nov 24 09:50:16 crc kubenswrapper[4886]: I1124 09:50:16.009810 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vfg\" (UniqueName: \"kubernetes.io/projected/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-kube-api-access-l8vfg\") pod \"crc-debug-7rgp6\" (UID: \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\") " pod="openshift-must-gather-tjqb8/crc-debug-7rgp6" Nov 24 09:50:16 crc 
kubenswrapper[4886]: I1124 09:50:16.028087 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vfg\" (UniqueName: \"kubernetes.io/projected/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-kube-api-access-l8vfg\") pod \"crc-debug-7rgp6\" (UID: \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\") " pod="openshift-must-gather-tjqb8/crc-debug-7rgp6"
Nov 24 09:50:16 crc kubenswrapper[4886]: I1124 09:50:16.046820 4886 scope.go:117] "RemoveContainer" containerID="0fd839337cbfac1475a0aeb57d61a844cb0bb4c64e386c05a10202cc0926edaf"
Nov 24 09:50:16 crc kubenswrapper[4886]: I1124 09:50:16.046885 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-8hkjx"
Nov 24 09:50:16 crc kubenswrapper[4886]: I1124 09:50:16.116498 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-7rgp6"
Nov 24 09:50:16 crc kubenswrapper[4886]: W1124 09:50:16.144969 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfdb31c7_bb90_407b_b7ab_7ff7bc2349cd.slice/crio-7f782ac6ba2fa4ad5e3a8ef1af90e800d5f638b977135f5173a223886c562143 WatchSource:0}: Error finding container 7f782ac6ba2fa4ad5e3a8ef1af90e800d5f638b977135f5173a223886c562143: Status 404 returned error can't find the container with id 7f782ac6ba2fa4ad5e3a8ef1af90e800d5f638b977135f5173a223886c562143
Nov 24 09:50:16 crc kubenswrapper[4886]: I1124 09:50:16.866138 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8594ba94-ae16-4f6d-a33b-0912fff121b7" path="/var/lib/kubelet/pods/8594ba94-ae16-4f6d-a33b-0912fff121b7/volumes"
Nov 24 09:50:17 crc kubenswrapper[4886]: I1124 09:50:17.059682 4886 generic.go:334] "Generic (PLEG): container finished" podID="cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd" containerID="7d460bb0bbd1ccb00d06ab75e20763e589085c7a4fcebfcb0dd944849c56133b" exitCode=0
Nov 24 09:50:17 crc kubenswrapper[4886]: I1124 09:50:17.059744 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/crc-debug-7rgp6" event={"ID":"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd","Type":"ContainerDied","Data":"7d460bb0bbd1ccb00d06ab75e20763e589085c7a4fcebfcb0dd944849c56133b"}
Nov 24 09:50:17 crc kubenswrapper[4886]: I1124 09:50:17.059823 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/crc-debug-7rgp6" event={"ID":"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd","Type":"ContainerStarted","Data":"7f782ac6ba2fa4ad5e3a8ef1af90e800d5f638b977135f5173a223886c562143"}
Nov 24 09:50:17 crc kubenswrapper[4886]: I1124 09:50:17.103620 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-7rgp6"]
Nov 24 09:50:17 crc kubenswrapper[4886]: I1124 09:50:17.111350 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tjqb8/crc-debug-7rgp6"]
Nov 24 09:50:18 crc kubenswrapper[4886]: I1124 09:50:18.193113 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-7rgp6"
Nov 24 09:50:18 crc kubenswrapper[4886]: I1124 09:50:18.356807 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-host\") pod \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\" (UID: \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\") "
Nov 24 09:50:18 crc kubenswrapper[4886]: I1124 09:50:18.356938 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-host" (OuterVolumeSpecName: "host") pod "cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd" (UID: "cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:50:18 crc kubenswrapper[4886]: I1124 09:50:18.357163 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8vfg\" (UniqueName: \"kubernetes.io/projected/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-kube-api-access-l8vfg\") pod \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\" (UID: \"cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd\") "
Nov 24 09:50:18 crc kubenswrapper[4886]: I1124 09:50:18.357661 4886 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-host\") on node \"crc\" DevicePath \"\""
Nov 24 09:50:18 crc kubenswrapper[4886]: I1124 09:50:18.363136 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-kube-api-access-l8vfg" (OuterVolumeSpecName: "kube-api-access-l8vfg") pod "cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd" (UID: "cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd"). InnerVolumeSpecName "kube-api-access-l8vfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:50:18 crc kubenswrapper[4886]: I1124 09:50:18.459786 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8vfg\" (UniqueName: \"kubernetes.io/projected/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd-kube-api-access-l8vfg\") on node \"crc\" DevicePath \"\""
Nov 24 09:50:18 crc kubenswrapper[4886]: I1124 09:50:18.865838 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd" path="/var/lib/kubelet/pods/cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd/volumes"
Nov 24 09:50:19 crc kubenswrapper[4886]: E1124 09:50:19.000591 4886 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfdb31c7_bb90_407b_b7ab_7ff7bc2349cd.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 09:50:19 crc kubenswrapper[4886]: I1124 09:50:19.084288 4886 scope.go:117] "RemoveContainer" containerID="7d460bb0bbd1ccb00d06ab75e20763e589085c7a4fcebfcb0dd944849c56133b"
Nov 24 09:50:19 crc kubenswrapper[4886]: I1124 09:50:19.084345 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/crc-debug-7rgp6"
Nov 24 09:50:34 crc kubenswrapper[4886]: I1124 09:50:34.077799 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76455fdd78-8k9rz_5555aeec-470e-473c-ad74-de78791861dc/barbican-api/0.log"
Nov 24 09:50:34 crc kubenswrapper[4886]: I1124 09:50:34.305989 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76455fdd78-8k9rz_5555aeec-470e-473c-ad74-de78791861dc/barbican-api-log/0.log"
Nov 24 09:50:34 crc kubenswrapper[4886]: I1124 09:50:34.346434 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8b4cf4966-gt5q7_cf27d89f-7c4b-49b5-a993-b851f86a2994/barbican-keystone-listener/0.log"
Nov 24 09:50:34 crc kubenswrapper[4886]: I1124 09:50:34.436986 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8b4cf4966-gt5q7_cf27d89f-7c4b-49b5-a993-b851f86a2994/barbican-keystone-listener-log/0.log"
Nov 24 09:50:34 crc kubenswrapper[4886]: I1124 09:50:34.583781 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-df69f5cf-v8lvl_903a1b7e-92e3-455b-af86-c46c9a290f11/barbican-worker/0.log"
Nov 24 09:50:34 crc kubenswrapper[4886]: I1124 09:50:34.647759 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-df69f5cf-v8lvl_903a1b7e-92e3-455b-af86-c46c9a290f11/barbican-worker-log/0.log"
Nov 24 09:50:34 crc kubenswrapper[4886]: I1124 09:50:34.840205 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k624c_e26edc4e-16ec-494e-9011-1dcaf51099be/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:34 crc kubenswrapper[4886]: I1124 09:50:34.912910 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c1ffc60-4954-4d55-800e-00cb24c6cfa4/ceilometer-central-agent/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.016865 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c1ffc60-4954-4d55-800e-00cb24c6cfa4/ceilometer-notification-agent/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.087288 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c1ffc60-4954-4d55-800e-00cb24c6cfa4/sg-core/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.096050 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c1ffc60-4954-4d55-800e-00cb24c6cfa4/proxy-httpd/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.386924 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_22fb5c5f-d94b-4069-bef0-62e95c42e89e/cinder-api/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.393271 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_22fb5c5f-d94b-4069-bef0-62e95c42e89e/cinder-api-log/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.496647 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_20a1d599-cfce-400c-a6d9-9a060ebe4b8e/cinder-scheduler/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.623863 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-w42vq_352e856d-6e0d-4aba-b2ce-8063ed40a041/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.648253 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_20a1d599-cfce-400c-a6d9-9a060ebe4b8e/probe/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.857187 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5_f7b875a5-9e9f-43bc-b6da-48223ea2c653/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:35 crc kubenswrapper[4886]: I1124 09:50:35.886882 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-bw54t_b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06/init/0.log"
Nov 24 09:50:36 crc kubenswrapper[4886]: I1124 09:50:36.139878 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-bw54t_b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06/init/0.log"
Nov 24 09:50:36 crc kubenswrapper[4886]: I1124 09:50:36.162606 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-bw54t_b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06/dnsmasq-dns/0.log"
Nov 24 09:50:36 crc kubenswrapper[4886]: I1124 09:50:36.257708 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-h68td_5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:36 crc kubenswrapper[4886]: I1124 09:50:36.390086 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f223fa66-cb1a-4f97-970b-9c64793d34b9/glance-httpd/0.log"
Nov 24 09:50:36 crc kubenswrapper[4886]: I1124 09:50:36.480446 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f223fa66-cb1a-4f97-970b-9c64793d34b9/glance-log/0.log"
Nov 24 09:50:36 crc kubenswrapper[4886]: I1124 09:50:36.631775 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404/glance-httpd/0.log"
Nov 24 09:50:36 crc kubenswrapper[4886]: I1124 09:50:36.676381 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404/glance-log/0.log"
Nov 24 09:50:36 crc kubenswrapper[4886]: I1124 09:50:36.854600 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-664f9d77dd-zw4gm_19e275c2-5fd6-4ea7-a023-6d7478ae5750/horizon/0.log"
Nov 24 09:50:37 crc kubenswrapper[4886]: I1124 09:50:37.058103 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-twcff_06314c58-da5f-46e4-ac6d-63f95ca6a6f9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:37 crc kubenswrapper[4886]: I1124 09:50:37.266601 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-664f9d77dd-zw4gm_19e275c2-5fd6-4ea7-a023-6d7478ae5750/horizon-log/0.log"
Nov 24 09:50:37 crc kubenswrapper[4886]: I1124 09:50:37.332524 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4wj4b_06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:37 crc kubenswrapper[4886]: I1124 09:50:37.588728 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-78ff5b5cf5-swx4n_ba9d3f7a-c442-4fac-bc1f-4863e157b084/keystone-api/0.log"
Nov 24 09:50:37 crc kubenswrapper[4886]: I1124 09:50:37.601833 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_39f8779a-9800-4658-aa0a-8603669d7fbe/kube-state-metrics/0.log"
Nov 24 09:50:37 crc kubenswrapper[4886]: I1124 09:50:37.790082 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k47md_ce68d69b-17a7-483e-be9c-5a39b0e2dee8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:38 crc kubenswrapper[4886]: I1124 09:50:38.244319 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58775dd67f-bvv4s_f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b/neutron-httpd/0.log"
Nov 24 09:50:38 crc kubenswrapper[4886]: I1124 09:50:38.297090 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s_ad4158ea-36b4-499a-bfb0-d6743c87340a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:38 crc kubenswrapper[4886]: I1124 09:50:38.313076 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58775dd67f-bvv4s_f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b/neutron-api/0.log"
Nov 24 09:50:38 crc kubenswrapper[4886]: I1124 09:50:38.951660 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f31eee06-9a4d-4956-b314-b4413ac5aba0/nova-api-log/0.log"
Nov 24 09:50:39 crc kubenswrapper[4886]: I1124 09:50:39.075819 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2f98453e-9a49-498a-bcc6-6a4d82f39fc7/nova-cell0-conductor-conductor/0.log"
Nov 24 09:50:39 crc kubenswrapper[4886]: I1124 09:50:39.346973 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ec788e35-0154-4b74-86b4-5a21037b3e4a/nova-cell1-conductor-conductor/0.log"
Nov 24 09:50:39 crc kubenswrapper[4886]: I1124 09:50:39.389720 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f31eee06-9a4d-4956-b314-b4413ac5aba0/nova-api-api/0.log"
Nov 24 09:50:39 crc kubenswrapper[4886]: I1124 09:50:39.454818 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4/nova-cell1-novncproxy-novncproxy/0.log"
Nov 24 09:50:39 crc kubenswrapper[4886]: I1124 09:50:39.702411 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-f2b4z_36804e58-9c67-454c-a7b2-6aca006eb481/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:39 crc kubenswrapper[4886]: I1124 09:50:39.877520 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6d1021e4-f165-4881-9bcc-2cc19416ab64/nova-metadata-log/0.log"
Nov 24 09:50:40 crc kubenswrapper[4886]: I1124 09:50:40.100208 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8288e829-a6d4-4f11-abf2-e9cd50df6c4b/nova-scheduler-scheduler/0.log"
Nov 24 09:50:40 crc kubenswrapper[4886]: I1124 09:50:40.199289 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_11ea3612-3583-4b82-9047-d11cd751adcd/mysql-bootstrap/0.log"
Nov 24 09:50:40 crc kubenswrapper[4886]: I1124 09:50:40.478995 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_11ea3612-3583-4b82-9047-d11cd751adcd/mysql-bootstrap/0.log"
Nov 24 09:50:40 crc kubenswrapper[4886]: I1124 09:50:40.535782 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_11ea3612-3583-4b82-9047-d11cd751adcd/galera/0.log"
Nov 24 09:50:40 crc kubenswrapper[4886]: I1124 09:50:40.712934 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3ec7bf38-594d-4606-ab2c-76f4fc8b6a29/mysql-bootstrap/0.log"
Nov 24 09:50:40 crc kubenswrapper[4886]: I1124 09:50:40.931603 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3ec7bf38-594d-4606-ab2c-76f4fc8b6a29/galera/0.log"
Nov 24 09:50:40 crc kubenswrapper[4886]: I1124 09:50:40.932084 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3ec7bf38-594d-4606-ab2c-76f4fc8b6a29/mysql-bootstrap/0.log"
Nov 24 09:50:41 crc kubenswrapper[4886]: I1124 09:50:41.148522 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_801740d3-12c4-4576-a79d-186b36e3f079/openstackclient/0.log"
Nov 24 09:50:41 crc kubenswrapper[4886]: I1124 09:50:41.195274 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hqh7m_abe55c7e-0682-4591-bd60-59ee1de24094/openstack-network-exporter/0.log"
Nov 24 09:50:41 crc kubenswrapper[4886]: I1124 09:50:41.208135 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6d1021e4-f165-4881-9bcc-2cc19416ab64/nova-metadata-metadata/0.log"
Nov 24 09:50:41 crc kubenswrapper[4886]: I1124 09:50:41.437861 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vclvw_6111b0c6-fec0-4738-8a86-e433a2b5c673/ovsdb-server-init/0.log"
Nov 24 09:50:41 crc kubenswrapper[4886]: I1124 09:50:41.675744 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vclvw_6111b0c6-fec0-4738-8a86-e433a2b5c673/ovsdb-server/0.log"
Nov 24 09:50:41 crc kubenswrapper[4886]: I1124 09:50:41.685583 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vclvw_6111b0c6-fec0-4738-8a86-e433a2b5c673/ovsdb-server-init/0.log"
Nov 24 09:50:41 crc kubenswrapper[4886]: I1124 09:50:41.686495 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vclvw_6111b0c6-fec0-4738-8a86-e433a2b5c673/ovs-vswitchd/0.log"
Nov 24 09:50:41 crc kubenswrapper[4886]: I1124 09:50:41.930908 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rzmth_b7951685-e0e7-4524-ba49-b720357aa59c/ovn-controller/0.log"
Nov 24 09:50:42 crc kubenswrapper[4886]: I1124 09:50:42.029918 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mqb42_cde6df39-d639-4855-a34f-29ff9af5c870/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:42 crc kubenswrapper[4886]: I1124 09:50:42.210938 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_28013454-2b4a-4d68-87fa-272095c8a651/openstack-network-exporter/0.log"
Nov 24 09:50:42 crc kubenswrapper[4886]: I1124 09:50:42.302728 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_28013454-2b4a-4d68-87fa-272095c8a651/ovn-northd/0.log"
Nov 24 09:50:42 crc kubenswrapper[4886]: I1124 09:50:42.466713 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1/openstack-network-exporter/0.log"
Nov 24 09:50:42 crc kubenswrapper[4886]: I1124 09:50:42.599631 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1/ovsdbserver-nb/0.log"
Nov 24 09:50:42 crc kubenswrapper[4886]: I1124 09:50:42.635784 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_495262a2-0785-4f84-aeb5-00eff9c76e9a/openstack-network-exporter/0.log"
Nov 24 09:50:42 crc kubenswrapper[4886]: I1124 09:50:42.693144 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_495262a2-0785-4f84-aeb5-00eff9c76e9a/ovsdbserver-sb/0.log"
Nov 24 09:50:42 crc kubenswrapper[4886]: I1124 09:50:42.946990 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-646878466-vzd4z_98af9edc-5cf6-4dd9-93e0-2e320d0d0939/placement-api/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.022534 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-646878466-vzd4z_98af9edc-5cf6-4dd9-93e0-2e320d0d0939/placement-log/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.059114 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_533cb212-964b-4427-ac3f-ebafca6d8787/setup-container/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.421484 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f14f0ef7-768e-4fc8-a2d1-b852fe44d773/setup-container/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.446118 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_533cb212-964b-4427-ac3f-ebafca6d8787/rabbitmq/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.447467 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_533cb212-964b-4427-ac3f-ebafca6d8787/setup-container/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.635207 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f14f0ef7-768e-4fc8-a2d1-b852fe44d773/setup-container/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.707814 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l_cc0c00e3-1e23-4800-9a47-8d86397ba6f3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.720699 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f14f0ef7-768e-4fc8-a2d1-b852fe44d773/rabbitmq/0.log"
Nov 24 09:50:43 crc kubenswrapper[4886]: I1124 09:50:43.954546 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-g9sf7_21022c6d-8637-4952-b0c1-33b80b316a3a/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:44 crc kubenswrapper[4886]: I1124 09:50:44.038970 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz_b715926a-c856-44c7-b863-95bd080cbe24/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:44 crc kubenswrapper[4886]: I1124 09:50:44.228255 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-m2f4l_80cccca8-e8d6-4772-b514-83482acf917e/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:44 crc kubenswrapper[4886]: I1124 09:50:44.354440 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-r4dj2_c9133cae-660e-41cc-ad42-4b3772bdcdfe/ssh-known-hosts-edpm-deployment/0.log"
Nov 24 09:50:44 crc kubenswrapper[4886]: I1124 09:50:44.595618 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-558564f98c-jl2ms_c1f11d5d-8b31-47b7-9ceb-197d5ca23475/proxy-server/0.log"
Nov 24 09:50:44 crc kubenswrapper[4886]: I1124 09:50:44.650422 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-558564f98c-jl2ms_c1f11d5d-8b31-47b7-9ceb-197d5ca23475/proxy-httpd/0.log"
Nov 24 09:50:44 crc kubenswrapper[4886]: I1124 09:50:44.759826 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dxnk5_c9a54508-7f70-4e5d-952a-587f8fabeb1c/swift-ring-rebalance/0.log"
Nov 24 09:50:44 crc kubenswrapper[4886]: I1124 09:50:44.897331 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/account-reaper/0.log"
Nov 24 09:50:44 crc kubenswrapper[4886]: I1124 09:50:44.906790 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/account-auditor/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.040593 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/account-replicator/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.157906 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/container-auditor/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.176746 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/account-server/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.204108 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/container-replicator/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.330613 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/container-server/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.370530 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/container-updater/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.421502 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-expirer/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.493985 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-auditor/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.570809 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-replicator/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.626191 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-server/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.705735 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-updater/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.722011 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/rsync/0.log"
Nov 24 09:50:45 crc kubenswrapper[4886]: I1124 09:50:45.829786 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/swift-recon-cron/0.log"
Nov 24 09:50:46 crc kubenswrapper[4886]: I1124 09:50:46.004100 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4_ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:46 crc kubenswrapper[4886]: I1124 09:50:46.132485 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7/tempest-tests-tempest-tests-runner/0.log"
Nov 24 09:50:46 crc kubenswrapper[4886]: I1124 09:50:46.295415 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2e959b23-6fa2-4d10-b235-5fdc8c476ff9/test-operator-logs-container/0.log"
Nov 24 09:50:46 crc kubenswrapper[4886]: I1124 09:50:46.438817 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v_23e016b0-6143-48d5-85e3-fad3392b2de4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 09:50:56 crc kubenswrapper[4886]: I1124 09:50:56.269585 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7db518ac-866a-47c8-a5fb-264625a1c1fd/memcached/0.log"
Nov 24 09:51:13 crc kubenswrapper[4886]: I1124 09:51:13.952587 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/util/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.275054 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/pull/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.297752 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/pull/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.303799 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/util/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.554806 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/pull/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.574596 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/extract/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.597186 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/util/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.759997 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-pvdd8_6c8c64e0-e4d5-45c1-a697-205deeb19c54/kube-rbac-proxy/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.882263 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-pvdd8_6c8c64e0-e4d5-45c1-a697-205deeb19c54/manager/0.log"
Nov 24 09:51:14 crc kubenswrapper[4886]: I1124 09:51:14.893286 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-6pwgl_0ca0fbbb-1734-4a4a-b996-c96aa000131c/kube-rbac-proxy/0.log"
Nov 24 09:51:15 crc kubenswrapper[4886]: I1124 09:51:15.082721 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-6pwgl_0ca0fbbb-1734-4a4a-b996-c96aa000131c/manager/0.log"
Nov 24 09:51:15 crc kubenswrapper[4886]: I1124 09:51:15.147800 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-9lqmh_ad04acbe-59a4-490c-ae4e-eacfbd65257c/kube-rbac-proxy/0.log"
Nov 24 09:51:15 crc kubenswrapper[4886]: I1124 09:51:15.164967 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-9lqmh_ad04acbe-59a4-490c-ae4e-eacfbd65257c/manager/0.log"
Nov 24 09:51:15 crc kubenswrapper[4886]: I1124 09:51:15.412887 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-jb6p4_a991f440-958e-42d4-b062-7369966d84c3/kube-rbac-proxy/0.log"
Nov 24 09:51:15 crc kubenswrapper[4886]: I1124 09:51:15.495258 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-jb6p4_a991f440-958e-42d4-b062-7369966d84c3/manager/0.log"
Nov 24 09:51:15 crc kubenswrapper[4886]: I1124 09:51:15.602748 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-glmkz_f52431d9-53d4-415b-9e99-3e92fe7be4ca/kube-rbac-proxy/0.log"
Nov 24 09:51:15 crc kubenswrapper[4886]: I1124 09:51:15.655504 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-glmkz_f52431d9-53d4-415b-9e99-3e92fe7be4ca/manager/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.006860 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6df98c44d8-rsqm2_0f03538e-297e-410d-bf6e-0f947cba868c/kube-rbac-proxy/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.039443 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-z7c6j_def4f2b0-daf8-48c1-95ab-98c2c6f8c72d/kube-rbac-proxy/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.235159 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-z7c6j_def4f2b0-daf8-48c1-95ab-98c2c6f8c72d/manager/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.448488 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6df98c44d8-rsqm2_0f03538e-297e-410d-bf6e-0f947cba868c/manager/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.517122 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-tjkbx_6fc8a4d5-fad4-4eca-95c0-329b968d5c9d/kube-rbac-proxy/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.567379 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-tjkbx_6fc8a4d5-fad4-4eca-95c0-329b968d5c9d/manager/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.660055 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-zks44_607c4e63-3cb6-43f8-86b0-7af4b07e81e4/kube-rbac-proxy/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.810038 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-zks44_607c4e63-3cb6-43f8-86b0-7af4b07e81e4/manager/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.914499 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-5zcvh_73e41e35-4218-492b-93d6-d068c687ee6e/kube-rbac-proxy/0.log"
Nov 24 09:51:16 crc kubenswrapper[4886]: I1124 09:51:16.929141 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-5zcvh_73e41e35-4218-492b-93d6-d068c687ee6e/manager/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.122206 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-47vf5_671e2772-1d7f-4c97-91f6-83f0782b4f6b/kube-rbac-proxy/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.191492 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-47vf5_671e2772-1d7f-4c97-91f6-83f0782b4f6b/manager/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.350212 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-kczhh_9a2dc275-73a5-4caf-89fe-120ce9401655/kube-rbac-proxy/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.414858 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-kczhh_9a2dc275-73a5-4caf-89fe-120ce9401655/manager/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.478741 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-bpzxz_f269ac9a-b191-4262-93bf-6cbd27c0d445/kube-rbac-proxy/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.678583 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-bpzxz_f269ac9a-b191-4262-93bf-6cbd27c0d445/manager/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.796955 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-qnv8p_8aadf5e6-b19e-4b19-b812-50c5bd4721a4/kube-rbac-proxy/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.798577 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-qnv8p_8aadf5e6-b19e-4b19-b812-50c5bd4721a4/manager/0.log"
Nov 24 09:51:17 crc kubenswrapper[4886]: I1124 09:51:17.923797 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw_6f4398e5-a5b8-4853-ac68-76385d1a749d/kube-rbac-proxy/0.log"
Nov 24 09:51:18 crc kubenswrapper[4886]: I1124 09:51:18.042663 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw_6f4398e5-a5b8-4853-ac68-76385d1a749d/manager/0.log"
Nov 24 09:51:18 crc kubenswrapper[4886]: I1124 09:51:18.169549 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd7fdf8c-ztg92_33c0c863-6350-4195-acb5-0dcc801d867b/kube-rbac-proxy/0.log"
Nov 24 09:51:18 crc kubenswrapper[4886]: I1124 09:51:18.355797 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5968c54bfb-nfhfk_48f1853b-9770-4f82-af2b-fc2be2f426b6/kube-rbac-proxy/0.log"
Nov 24 09:51:18 crc kubenswrapper[4886]: I1124 09:51:18.580033 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wfxd2_f134bfae-349d-4078-b49c-7aba86c32093/registry-server/0.log"
Nov 24 09:51:18 crc kubenswrapper[4886]: I1124 09:51:18.592127 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5968c54bfb-nfhfk_48f1853b-9770-4f82-af2b-fc2be2f426b6/operator/0.log"
Nov 24 09:51:18 crc kubenswrapper[4886]: I1124 09:51:18.703143 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-z6p4s_26b9db43-5cbd-4513-8685-976bc2bccad8/kube-rbac-proxy/0.log"
Nov 24 09:51:18 crc kubenswrapper[4886]: I1124 09:51:18.959242 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-z6p4s_26b9db43-5cbd-4513-8685-976bc2bccad8/manager/0.log"
Nov 24 09:51:18 crc kubenswrapper[4886]: I1124 09:51:18.996441 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-nwx4f_789de7d5-5a8b-4005-b37d-83057da5b4e7/kube-rbac-proxy/0.log"
Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.039728 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-nwx4f_789de7d5-5a8b-4005-b37d-83057da5b4e7/manager/0.log"
Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.298011 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-qmlpw_213c4726-cd5c-4f79-ac2a-bc3ca07f0019/kube-rbac-proxy/0.log"
Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.306561 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz_50b161b3-4911-4ab1-b348-b1b52713c856/operator/0.log"
Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.525585 4886 log.go:25]
"Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-qmlpw_213c4726-cd5c-4f79-ac2a-bc3ca07f0019/manager/0.log" Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.666351 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-62fz7_ac24d05a-4485-4fad-a03c-2fb381960d7b/kube-rbac-proxy/0.log" Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.675052 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd7fdf8c-ztg92_33c0c863-6350-4195-acb5-0dcc801d867b/manager/0.log" Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.682635 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-62fz7_ac24d05a-4485-4fad-a03c-2fb381960d7b/manager/0.log" Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.874291 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-kgnpt_e69be7ce-2069-42ab-a8c9-7b4c29243ff0/manager/0.log" Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.879111 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-kgnpt_e69be7ce-2069-42ab-a8c9-7b4c29243ff0/kube-rbac-proxy/0.log" Nov 24 09:51:19 crc kubenswrapper[4886]: I1124 09:51:19.922853 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-7zbrr_dc151242-3f76-4414-9a2b-a5e28adf12af/kube-rbac-proxy/0.log" Nov 24 09:51:20 crc kubenswrapper[4886]: I1124 09:51:20.049373 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-7zbrr_dc151242-3f76-4414-9a2b-a5e28adf12af/manager/0.log" Nov 24 09:51:36 crc kubenswrapper[4886]: I1124 
09:51:36.387746 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5fr58_afdfb747-0bc0-40a4-89e6-dc6970617398/control-plane-machine-set-operator/0.log" Nov 24 09:51:36 crc kubenswrapper[4886]: I1124 09:51:36.582282 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fqpl9_e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f/machine-api-operator/0.log" Nov 24 09:51:36 crc kubenswrapper[4886]: I1124 09:51:36.606007 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fqpl9_e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f/kube-rbac-proxy/0.log" Nov 24 09:51:49 crc kubenswrapper[4886]: I1124 09:51:49.322246 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ff82d_fc0d7b30-aa61-4f00-a908-d13689ed0b04/cert-manager-controller/0.log" Nov 24 09:51:49 crc kubenswrapper[4886]: I1124 09:51:49.500324 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-gsdqn_9475a865-8fb9-4c93-aeb0-09e9b8285a88/cert-manager-cainjector/0.log" Nov 24 09:51:49 crc kubenswrapper[4886]: I1124 09:51:49.586799 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-9cfx6_7b1b394b-0362-4ee6-a956-48d7598ef6a2/cert-manager-webhook/0.log" Nov 24 09:52:01 crc kubenswrapper[4886]: I1124 09:52:01.755673 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-nxtnd_166ba125-3d7b-4ab8-bbca-7f707fd9261b/nmstate-console-plugin/0.log" Nov 24 09:52:01 crc kubenswrapper[4886]: I1124 09:52:01.784043 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:52:01 crc kubenswrapper[4886]: I1124 09:52:01.784109 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:52:01 crc kubenswrapper[4886]: I1124 09:52:01.945440 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bxczf_50df2428-7c0e-4f4a-9c13-dd5cb4038f2e/nmstate-handler/0.log" Nov 24 09:52:01 crc kubenswrapper[4886]: I1124 09:52:01.966463 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-646k5_028a41e3-6c82-4e95-a4e5-fc835e4d75af/nmstate-metrics/0.log" Nov 24 09:52:02 crc kubenswrapper[4886]: I1124 09:52:02.003484 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-646k5_028a41e3-6c82-4e95-a4e5-fc835e4d75af/kube-rbac-proxy/0.log" Nov 24 09:52:02 crc kubenswrapper[4886]: I1124 09:52:02.222164 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-dvjwb_33d55c5c-55cd-453e-8888-c064a7e0e36d/nmstate-webhook/0.log" Nov 24 09:52:02 crc kubenswrapper[4886]: I1124 09:52:02.223898 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-mc67m_2c6833a8-49fc-4959-b487-21009d6da024/nmstate-operator/0.log" Nov 24 09:52:17 crc kubenswrapper[4886]: I1124 09:52:17.846491 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-znxdl_8bfe8a52-0472-407d-a1c4-a828c81e5032/controller/0.log" Nov 24 09:52:17 crc kubenswrapper[4886]: I1124 09:52:17.974064 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-znxdl_8bfe8a52-0472-407d-a1c4-a828c81e5032/kube-rbac-proxy/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.138471 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-frr-files/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.391941 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-reloader/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.399493 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-frr-files/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.431507 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-metrics/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.437743 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-reloader/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.604812 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-reloader/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.608188 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-frr-files/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.681951 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-metrics/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.745227 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-metrics/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.959845 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-frr-files/0.log" Nov 24 09:52:18 crc kubenswrapper[4886]: I1124 09:52:18.964361 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/controller/0.log" Nov 24 09:52:19 crc kubenswrapper[4886]: I1124 09:52:19.003937 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-reloader/0.log" Nov 24 09:52:19 crc kubenswrapper[4886]: I1124 09:52:19.023201 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-metrics/0.log" Nov 24 09:52:19 crc kubenswrapper[4886]: I1124 09:52:19.287165 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/kube-rbac-proxy/0.log" Nov 24 09:52:19 crc kubenswrapper[4886]: I1124 09:52:19.299087 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/kube-rbac-proxy-frr/0.log" Nov 24 09:52:19 crc kubenswrapper[4886]: I1124 09:52:19.303024 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/frr-metrics/0.log" Nov 24 09:52:19 crc kubenswrapper[4886]: I1124 09:52:19.540921 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/reloader/0.log" Nov 24 09:52:19 crc kubenswrapper[4886]: I1124 09:52:19.612813 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-npnj4_3d2f363e-5545-4437-90ff-060ba6628fa9/frr-k8s-webhook-server/0.log" Nov 24 09:52:19 crc kubenswrapper[4886]: I1124 09:52:19.858866 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-688456bb67-dhj9s_24f2f5da-80b6-49b8-abe7-43f1301c84db/manager/0.log" Nov 24 09:52:20 crc kubenswrapper[4886]: I1124 09:52:20.062247 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fd5b69667-tg7zm_58ed8691-0e33-4c91-aecb-d8bfcceab2de/webhook-server/0.log" Nov 24 09:52:20 crc kubenswrapper[4886]: I1124 09:52:20.114117 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gwqzh_795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00/kube-rbac-proxy/0.log" Nov 24 09:52:20 crc kubenswrapper[4886]: I1124 09:52:20.641990 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/frr/0.log" Nov 24 09:52:20 crc kubenswrapper[4886]: I1124 09:52:20.794772 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gwqzh_795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00/speaker/0.log" Nov 24 09:52:31 crc kubenswrapper[4886]: I1124 09:52:31.785017 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:52:31 crc kubenswrapper[4886]: I1124 09:52:31.787783 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 24 09:52:35 crc kubenswrapper[4886]: I1124 09:52:35.568629 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/util/0.log" Nov 24 09:52:35 crc kubenswrapper[4886]: I1124 09:52:35.791984 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/util/0.log" Nov 24 09:52:35 crc kubenswrapper[4886]: I1124 09:52:35.794232 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/pull/0.log" Nov 24 09:52:35 crc kubenswrapper[4886]: I1124 09:52:35.884124 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/pull/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.109668 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/extract/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.139562 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/pull/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.145088 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/util/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.322299 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-utilities/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.551648 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-content/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.579026 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-utilities/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.603062 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-content/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.875990 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-content/0.log" Nov 24 09:52:36 crc kubenswrapper[4886]: I1124 09:52:36.888348 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-utilities/0.log" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.172834 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-utilities/0.log" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.397165 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-utilities/0.log" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.484601 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-content/0.log" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.510483 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-content/0.log" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.527583 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/registry-server/0.log" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.631163 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-utilities/0.log" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.679220 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-content/0.log" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.786898 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c6t9k"] Nov 24 09:52:37 crc kubenswrapper[4886]: E1124 09:52:37.789271 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd" containerName="container-00" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.789314 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd" containerName="container-00" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.789565 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfdb31c7-bb90-407b-b7ab-7ff7bc2349cd" containerName="container-00" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.791266 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.804521 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6t9k"] Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.936595 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwrl\" (UniqueName: \"kubernetes.io/projected/2dd60250-81ae-417e-8936-7cba3901703b-kube-api-access-mbwrl\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.936661 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-utilities\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:37 crc kubenswrapper[4886]: I1124 09:52:37.936691 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-catalog-content\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.038389 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbwrl\" (UniqueName: \"kubernetes.io/projected/2dd60250-81ae-417e-8936-7cba3901703b-kube-api-access-mbwrl\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.038935 4886 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-utilities\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.038997 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-catalog-content\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.040212 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-utilities\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.040560 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-catalog-content\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.066479 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbwrl\" (UniqueName: \"kubernetes.io/projected/2dd60250-81ae-417e-8936-7cba3901703b-kube-api-access-mbwrl\") pod \"redhat-operators-c6t9k\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.133951 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.374623 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/registry-server/0.log" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.396403 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/util/0.log" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.781444 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/pull/0.log" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.789684 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/util/0.log" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.815794 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/pull/0.log" Nov 24 09:52:38 crc kubenswrapper[4886]: I1124 09:52:38.866111 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6t9k"] Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.136302 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/util/0.log" Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.171120 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/extract/0.log" Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.230744 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/pull/0.log" Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.508246 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-psbrg_5c32598f-bb74-4615-b8f9-77f36f97f80a/marketplace-operator/0.log" Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.547612 4886 generic.go:334] "Generic (PLEG): container finished" podID="2dd60250-81ae-417e-8936-7cba3901703b" containerID="1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404" exitCode=0 Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.547694 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6t9k" event={"ID":"2dd60250-81ae-417e-8936-7cba3901703b","Type":"ContainerDied","Data":"1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404"} Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.547757 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6t9k" event={"ID":"2dd60250-81ae-417e-8936-7cba3901703b","Type":"ContainerStarted","Data":"8853314757c9120380fbc566e3f532b9901c2b9fef47b8c9af62126589fbf5f9"} Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.551898 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:52:39 crc kubenswrapper[4886]: I1124 09:52:39.594025 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-utilities/0.log" Nov 24 09:52:39 crc 
kubenswrapper[4886]: I1124 09:52:39.957958 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-utilities/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.024782 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-content/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.045185 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-content/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.315091 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-utilities/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.339804 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-content/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.479399 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-utilities/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.509450 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/registry-server/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.725401 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-content/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.725423 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-content/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.735904 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-utilities/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.973415 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-content/0.log" Nov 24 09:52:40 crc kubenswrapper[4886]: I1124 09:52:40.992906 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-utilities/0.log" Nov 24 09:52:41 crc kubenswrapper[4886]: I1124 09:52:41.223682 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/registry-server/0.log" Nov 24 09:52:41 crc kubenswrapper[4886]: I1124 09:52:41.584198 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6t9k" event={"ID":"2dd60250-81ae-417e-8936-7cba3901703b","Type":"ContainerStarted","Data":"797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201"} Nov 24 09:52:43 crc kubenswrapper[4886]: I1124 09:52:43.604332 4886 generic.go:334] "Generic (PLEG): container finished" podID="2dd60250-81ae-417e-8936-7cba3901703b" containerID="797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201" exitCode=0 Nov 24 09:52:43 crc kubenswrapper[4886]: I1124 09:52:43.604425 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6t9k" event={"ID":"2dd60250-81ae-417e-8936-7cba3901703b","Type":"ContainerDied","Data":"797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201"} Nov 24 09:52:44 crc kubenswrapper[4886]: I1124 
09:52:44.617982 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6t9k" event={"ID":"2dd60250-81ae-417e-8936-7cba3901703b","Type":"ContainerStarted","Data":"d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b"} Nov 24 09:52:44 crc kubenswrapper[4886]: I1124 09:52:44.643063 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c6t9k" podStartSLOduration=3.11790947 podStartE2EDuration="7.643038602s" podCreationTimestamp="2025-11-24 09:52:37 +0000 UTC" firstStartedPulling="2025-11-24 09:52:39.551615994 +0000 UTC m=+3815.438354129" lastFinishedPulling="2025-11-24 09:52:44.076745136 +0000 UTC m=+3819.963483261" observedRunningTime="2025-11-24 09:52:44.638258355 +0000 UTC m=+3820.524996490" watchObservedRunningTime="2025-11-24 09:52:44.643038602 +0000 UTC m=+3820.529776737" Nov 24 09:52:48 crc kubenswrapper[4886]: I1124 09:52:48.136480 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:48 crc kubenswrapper[4886]: I1124 09:52:48.137257 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:49 crc kubenswrapper[4886]: I1124 09:52:49.186500 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6t9k" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="registry-server" probeResult="failure" output=< Nov 24 09:52:49 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 09:52:49 crc kubenswrapper[4886]: > Nov 24 09:52:58 crc kubenswrapper[4886]: I1124 09:52:58.201013 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:58 crc kubenswrapper[4886]: I1124 09:52:58.288584 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:52:58 crc kubenswrapper[4886]: I1124 09:52:58.462559 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6t9k"] Nov 24 09:52:59 crc kubenswrapper[4886]: I1124 09:52:59.773561 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c6t9k" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="registry-server" containerID="cri-o://d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b" gracePeriod=2 Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.478413 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.576070 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbwrl\" (UniqueName: \"kubernetes.io/projected/2dd60250-81ae-417e-8936-7cba3901703b-kube-api-access-mbwrl\") pod \"2dd60250-81ae-417e-8936-7cba3901703b\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.576427 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-catalog-content\") pod \"2dd60250-81ae-417e-8936-7cba3901703b\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.576488 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-utilities\") pod \"2dd60250-81ae-417e-8936-7cba3901703b\" (UID: \"2dd60250-81ae-417e-8936-7cba3901703b\") " Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.577659 4886 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-utilities" (OuterVolumeSpecName: "utilities") pod "2dd60250-81ae-417e-8936-7cba3901703b" (UID: "2dd60250-81ae-417e-8936-7cba3901703b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.600357 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd60250-81ae-417e-8936-7cba3901703b-kube-api-access-mbwrl" (OuterVolumeSpecName: "kube-api-access-mbwrl") pod "2dd60250-81ae-417e-8936-7cba3901703b" (UID: "2dd60250-81ae-417e-8936-7cba3901703b"). InnerVolumeSpecName "kube-api-access-mbwrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.665809 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dd60250-81ae-417e-8936-7cba3901703b" (UID: "2dd60250-81ae-417e-8936-7cba3901703b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.678431 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbwrl\" (UniqueName: \"kubernetes.io/projected/2dd60250-81ae-417e-8936-7cba3901703b-kube-api-access-mbwrl\") on node \"crc\" DevicePath \"\"" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.678470 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.678480 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd60250-81ae-417e-8936-7cba3901703b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.788079 4886 generic.go:334] "Generic (PLEG): container finished" podID="2dd60250-81ae-417e-8936-7cba3901703b" containerID="d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b" exitCode=0 Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.788127 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6t9k" event={"ID":"2dd60250-81ae-417e-8936-7cba3901703b","Type":"ContainerDied","Data":"d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b"} Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.788176 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6t9k" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.788216 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6t9k" event={"ID":"2dd60250-81ae-417e-8936-7cba3901703b","Type":"ContainerDied","Data":"8853314757c9120380fbc566e3f532b9901c2b9fef47b8c9af62126589fbf5f9"} Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.788244 4886 scope.go:117] "RemoveContainer" containerID="d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.810279 4886 scope.go:117] "RemoveContainer" containerID="797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.823876 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6t9k"] Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.845346 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c6t9k"] Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.880611 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd60250-81ae-417e-8936-7cba3901703b" path="/var/lib/kubelet/pods/2dd60250-81ae-417e-8936-7cba3901703b/volumes" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.886927 4886 scope.go:117] "RemoveContainer" containerID="1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.955384 4886 scope.go:117] "RemoveContainer" containerID="d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b" Nov 24 09:53:00 crc kubenswrapper[4886]: E1124 09:53:00.956550 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b\": container with ID starting with 
d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b not found: ID does not exist" containerID="d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.956599 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b"} err="failed to get container status \"d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b\": rpc error: code = NotFound desc = could not find container \"d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b\": container with ID starting with d47d87d62b3eba322beff88ec3f846df226254c42f3539727419b5c6ab7bbf8b not found: ID does not exist" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.956630 4886 scope.go:117] "RemoveContainer" containerID="797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201" Nov 24 09:53:00 crc kubenswrapper[4886]: E1124 09:53:00.963837 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201\": container with ID starting with 797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201 not found: ID does not exist" containerID="797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.963897 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201"} err="failed to get container status \"797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201\": rpc error: code = NotFound desc = could not find container \"797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201\": container with ID starting with 797f5563156595a12f45361a791b83570461329c80d50b1c6e76d37a99faa201 not found: ID does not 
exist" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.963929 4886 scope.go:117] "RemoveContainer" containerID="1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404" Nov 24 09:53:00 crc kubenswrapper[4886]: E1124 09:53:00.967373 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404\": container with ID starting with 1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404 not found: ID does not exist" containerID="1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404" Nov 24 09:53:00 crc kubenswrapper[4886]: I1124 09:53:00.967417 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404"} err="failed to get container status \"1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404\": rpc error: code = NotFound desc = could not find container \"1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404\": container with ID starting with 1ed06cd5dbb9e94cd8c0534ccd9070e11afa004f0af33b46b4f8b75027424404 not found: ID does not exist" Nov 24 09:53:01 crc kubenswrapper[4886]: I1124 09:53:01.784493 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:53:01 crc kubenswrapper[4886]: I1124 09:53:01.785197 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 
09:53:01 crc kubenswrapper[4886]: I1124 09:53:01.785341 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 09:53:01 crc kubenswrapper[4886]: I1124 09:53:01.786249 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:53:01 crc kubenswrapper[4886]: I1124 09:53:01.786431 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" gracePeriod=600 Nov 24 09:53:01 crc kubenswrapper[4886]: E1124 09:53:01.914059 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:53:02 crc kubenswrapper[4886]: I1124 09:53:02.812179 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" exitCode=0 Nov 24 09:53:02 crc kubenswrapper[4886]: I1124 09:53:02.812291 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" 
event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7"} Nov 24 09:53:02 crc kubenswrapper[4886]: I1124 09:53:02.812619 4886 scope.go:117] "RemoveContainer" containerID="4c81d65f580ad035ae3cb40985f146ce45574345f2884eeaa80d7f0764f1e262" Nov 24 09:53:02 crc kubenswrapper[4886]: I1124 09:53:02.813463 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:53:02 crc kubenswrapper[4886]: E1124 09:53:02.813870 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:53:06 crc kubenswrapper[4886]: I1124 09:53:06.742978 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="11ea3612-3583-4b82-9047-d11cd751adcd" containerName="galera" probeResult="failure" output="command timed out" Nov 24 09:53:13 crc kubenswrapper[4886]: I1124 09:53:13.851430 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:53:13 crc kubenswrapper[4886]: E1124 09:53:13.852673 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:53:27 crc kubenswrapper[4886]: I1124 
09:53:27.849668 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:53:27 crc kubenswrapper[4886]: E1124 09:53:27.851909 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:53:40 crc kubenswrapper[4886]: I1124 09:53:40.851292 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:53:40 crc kubenswrapper[4886]: E1124 09:53:40.852631 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:53:52 crc kubenswrapper[4886]: I1124 09:53:52.850276 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:53:52 crc kubenswrapper[4886]: E1124 09:53:52.851613 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:54:06 crc 
kubenswrapper[4886]: I1124 09:54:06.849789 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:54:06 crc kubenswrapper[4886]: E1124 09:54:06.851169 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:54:20 crc kubenswrapper[4886]: I1124 09:54:20.849078 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:54:20 crc kubenswrapper[4886]: E1124 09:54:20.849997 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:54:34 crc kubenswrapper[4886]: I1124 09:54:34.860064 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:54:34 crc kubenswrapper[4886]: E1124 09:54:34.861064 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 
24 09:54:37 crc kubenswrapper[4886]: I1124 09:54:37.824881 4886 generic.go:334] "Generic (PLEG): container finished" podID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerID="9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db" exitCode=0 Nov 24 09:54:37 crc kubenswrapper[4886]: I1124 09:54:37.824989 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" event={"ID":"a9b187e2-2e4e-4f8a-9c51-657f618463cd","Type":"ContainerDied","Data":"9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db"} Nov 24 09:54:37 crc kubenswrapper[4886]: I1124 09:54:37.826340 4886 scope.go:117] "RemoveContainer" containerID="9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db" Nov 24 09:54:38 crc kubenswrapper[4886]: I1124 09:54:38.761679 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tjqb8_must-gather-ldxjf_a9b187e2-2e4e-4f8a-9c51-657f618463cd/gather/0.log" Nov 24 09:54:46 crc kubenswrapper[4886]: I1124 09:54:46.834869 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tjqb8/must-gather-ldxjf"] Nov 24 09:54:46 crc kubenswrapper[4886]: I1124 09:54:46.835739 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" podUID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerName="copy" containerID="cri-o://da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9" gracePeriod=2 Nov 24 09:54:46 crc kubenswrapper[4886]: I1124 09:54:46.860159 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tjqb8/must-gather-ldxjf"] Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.391866 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tjqb8_must-gather-ldxjf_a9b187e2-2e4e-4f8a-9c51-657f618463cd/copy/0.log" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.395017 4886 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.532805 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-574f2\" (UniqueName: \"kubernetes.io/projected/a9b187e2-2e4e-4f8a-9c51-657f618463cd-kube-api-access-574f2\") pod \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\" (UID: \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\") " Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.532908 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9b187e2-2e4e-4f8a-9c51-657f618463cd-must-gather-output\") pod \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\" (UID: \"a9b187e2-2e4e-4f8a-9c51-657f618463cd\") " Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.547451 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b187e2-2e4e-4f8a-9c51-657f618463cd-kube-api-access-574f2" (OuterVolumeSpecName: "kube-api-access-574f2") pod "a9b187e2-2e4e-4f8a-9c51-657f618463cd" (UID: "a9b187e2-2e4e-4f8a-9c51-657f618463cd"). InnerVolumeSpecName "kube-api-access-574f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.635289 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-574f2\" (UniqueName: \"kubernetes.io/projected/a9b187e2-2e4e-4f8a-9c51-657f618463cd-kube-api-access-574f2\") on node \"crc\" DevicePath \"\"" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.702199 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b187e2-2e4e-4f8a-9c51-657f618463cd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a9b187e2-2e4e-4f8a-9c51-657f618463cd" (UID: "a9b187e2-2e4e-4f8a-9c51-657f618463cd"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.738227 4886 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a9b187e2-2e4e-4f8a-9c51-657f618463cd-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.914741 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tjqb8_must-gather-ldxjf_a9b187e2-2e4e-4f8a-9c51-657f618463cd/copy/0.log" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.915616 4886 generic.go:334] "Generic (PLEG): container finished" podID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerID="da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9" exitCode=143 Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.915685 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tjqb8/must-gather-ldxjf" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.915705 4886 scope.go:117] "RemoveContainer" containerID="da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9" Nov 24 09:54:47 crc kubenswrapper[4886]: I1124 09:54:47.950217 4886 scope.go:117] "RemoveContainer" containerID="9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db" Nov 24 09:54:48 crc kubenswrapper[4886]: I1124 09:54:48.048321 4886 scope.go:117] "RemoveContainer" containerID="da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9" Nov 24 09:54:48 crc kubenswrapper[4886]: E1124 09:54:48.048873 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9\": container with ID starting with da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9 not found: ID does not exist" 
containerID="da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9" Nov 24 09:54:48 crc kubenswrapper[4886]: I1124 09:54:48.048911 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9"} err="failed to get container status \"da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9\": rpc error: code = NotFound desc = could not find container \"da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9\": container with ID starting with da83b0c6f665e65a902fe33f1527ec06ec3da2409d6cb5d5c7fc14661bff13e9 not found: ID does not exist" Nov 24 09:54:48 crc kubenswrapper[4886]: I1124 09:54:48.048951 4886 scope.go:117] "RemoveContainer" containerID="9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db" Nov 24 09:54:48 crc kubenswrapper[4886]: E1124 09:54:48.049445 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db\": container with ID starting with 9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db not found: ID does not exist" containerID="9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db" Nov 24 09:54:48 crc kubenswrapper[4886]: I1124 09:54:48.049498 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db"} err="failed to get container status \"9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db\": rpc error: code = NotFound desc = could not find container \"9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db\": container with ID starting with 9a6f6e336942656937875b8d88110cf8389e25f31b5c15d188edd521f3c968db not found: ID does not exist" Nov 24 09:54:48 crc kubenswrapper[4886]: I1124 09:54:48.849839 4886 scope.go:117] 
"RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:54:48 crc kubenswrapper[4886]: E1124 09:54:48.850492 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:54:48 crc kubenswrapper[4886]: I1124 09:54:48.863424 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" path="/var/lib/kubelet/pods/a9b187e2-2e4e-4f8a-9c51-657f618463cd/volumes" Nov 24 09:55:01 crc kubenswrapper[4886]: I1124 09:55:01.849899 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:55:01 crc kubenswrapper[4886]: E1124 09:55:01.850750 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:55:13 crc kubenswrapper[4886]: I1124 09:55:13.849522 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:55:13 crc kubenswrapper[4886]: E1124 09:55:13.850646 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:55:27 crc kubenswrapper[4886]: I1124 09:55:27.849921 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:55:27 crc kubenswrapper[4886]: E1124 09:55:27.850666 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:55:38 crc kubenswrapper[4886]: I1124 09:55:38.850131 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:55:38 crc kubenswrapper[4886]: E1124 09:55:38.852278 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:55:51 crc kubenswrapper[4886]: I1124 09:55:51.851107 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:55:51 crc kubenswrapper[4886]: E1124 09:55:51.853467 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:56:03 crc kubenswrapper[4886]: I1124 09:56:03.849804 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:56:03 crc kubenswrapper[4886]: E1124 09:56:03.850644 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:56:15 crc kubenswrapper[4886]: I1124 09:56:15.850225 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:56:15 crc kubenswrapper[4886]: E1124 09:56:15.852997 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:56:27 crc kubenswrapper[4886]: I1124 09:56:27.850623 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:56:27 crc kubenswrapper[4886]: E1124 09:56:27.851719 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:56:39 crc kubenswrapper[4886]: I1124 09:56:39.852570 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:56:39 crc kubenswrapper[4886]: E1124 09:56:39.855551 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:56:53 crc kubenswrapper[4886]: I1124 09:56:53.849483 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:56:53 crc kubenswrapper[4886]: E1124 09:56:53.850543 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:57:04 crc kubenswrapper[4886]: I1124 09:57:04.858173 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:57:04 crc kubenswrapper[4886]: E1124 09:57:04.859171 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:57:17 crc kubenswrapper[4886]: I1124 09:57:17.849421 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:57:17 crc kubenswrapper[4886]: E1124 09:57:17.850266 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:57:30 crc kubenswrapper[4886]: I1124 09:57:30.848957 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:57:30 crc kubenswrapper[4886]: E1124 09:57:30.851120 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.103961 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w5rqf/must-gather-55bft"] Nov 24 09:57:34 crc kubenswrapper[4886]: E1124 09:57:34.114087 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerName="gather" Nov 
24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.114123 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerName="gather" Nov 24 09:57:34 crc kubenswrapper[4886]: E1124 09:57:34.114142 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerName="copy" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.114166 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerName="copy" Nov 24 09:57:34 crc kubenswrapper[4886]: E1124 09:57:34.114179 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="extract-utilities" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.114187 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="extract-utilities" Nov 24 09:57:34 crc kubenswrapper[4886]: E1124 09:57:34.114197 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="extract-content" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.114204 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="extract-content" Nov 24 09:57:34 crc kubenswrapper[4886]: E1124 09:57:34.114217 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="registry-server" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.114223 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="registry-server" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.114418 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerName="copy" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 
09:57:34.114438 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd60250-81ae-417e-8936-7cba3901703b" containerName="registry-server" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.114459 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b187e2-2e4e-4f8a-9c51-657f618463cd" containerName="gather" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.115538 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.119789 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w5rqf"/"openshift-service-ca.crt" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.120668 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-w5rqf"/"kube-root-ca.crt" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.121791 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-w5rqf"/"default-dockercfg-c7xxt" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.146133 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w5rqf/must-gather-55bft"] Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.269912 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsr8t\" (UniqueName: \"kubernetes.io/projected/795701b4-9f3a-4065-bc4f-54daab63c092-kube-api-access-lsr8t\") pod \"must-gather-55bft\" (UID: \"795701b4-9f3a-4065-bc4f-54daab63c092\") " pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.269970 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/795701b4-9f3a-4065-bc4f-54daab63c092-must-gather-output\") pod \"must-gather-55bft\" 
(UID: \"795701b4-9f3a-4065-bc4f-54daab63c092\") " pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.372450 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsr8t\" (UniqueName: \"kubernetes.io/projected/795701b4-9f3a-4065-bc4f-54daab63c092-kube-api-access-lsr8t\") pod \"must-gather-55bft\" (UID: \"795701b4-9f3a-4065-bc4f-54daab63c092\") " pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.372490 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/795701b4-9f3a-4065-bc4f-54daab63c092-must-gather-output\") pod \"must-gather-55bft\" (UID: \"795701b4-9f3a-4065-bc4f-54daab63c092\") " pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.373017 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/795701b4-9f3a-4065-bc4f-54daab63c092-must-gather-output\") pod \"must-gather-55bft\" (UID: \"795701b4-9f3a-4065-bc4f-54daab63c092\") " pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.391083 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsr8t\" (UniqueName: \"kubernetes.io/projected/795701b4-9f3a-4065-bc4f-54daab63c092-kube-api-access-lsr8t\") pod \"must-gather-55bft\" (UID: \"795701b4-9f3a-4065-bc4f-54daab63c092\") " pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.437883 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 09:57:34 crc kubenswrapper[4886]: I1124 09:57:34.984565 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-w5rqf/must-gather-55bft"] Nov 24 09:57:35 crc kubenswrapper[4886]: I1124 09:57:35.560604 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/must-gather-55bft" event={"ID":"795701b4-9f3a-4065-bc4f-54daab63c092","Type":"ContainerStarted","Data":"0c65361fa9f287fe451ec91ff301e35a24ffbecc8acdb7a888638ab1f9ba1beb"} Nov 24 09:57:36 crc kubenswrapper[4886]: I1124 09:57:36.573827 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/must-gather-55bft" event={"ID":"795701b4-9f3a-4065-bc4f-54daab63c092","Type":"ContainerStarted","Data":"727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381"} Nov 24 09:57:36 crc kubenswrapper[4886]: I1124 09:57:36.574507 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/must-gather-55bft" event={"ID":"795701b4-9f3a-4065-bc4f-54daab63c092","Type":"ContainerStarted","Data":"d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956"} Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.108656 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w5rqf/must-gather-55bft" podStartSLOduration=6.108635048 podStartE2EDuration="6.108635048s" podCreationTimestamp="2025-11-24 09:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:57:36.591760007 +0000 UTC m=+4112.478498162" watchObservedRunningTime="2025-11-24 09:57:40.108635048 +0000 UTC m=+4115.995373173" Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.110359 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-kxzx7"] Nov 24 09:57:40 crc kubenswrapper[4886]: 
I1124 09:57:40.112099 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.196613 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64xgt\" (UniqueName: \"kubernetes.io/projected/63c9d540-08af-4036-a549-ac285f0e64f6-kube-api-access-64xgt\") pod \"crc-debug-kxzx7\" (UID: \"63c9d540-08af-4036-a549-ac285f0e64f6\") " pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.196950 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63c9d540-08af-4036-a549-ac285f0e64f6-host\") pod \"crc-debug-kxzx7\" (UID: \"63c9d540-08af-4036-a549-ac285f0e64f6\") " pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.299421 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63c9d540-08af-4036-a549-ac285f0e64f6-host\") pod \"crc-debug-kxzx7\" (UID: \"63c9d540-08af-4036-a549-ac285f0e64f6\") " pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.299581 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64xgt\" (UniqueName: \"kubernetes.io/projected/63c9d540-08af-4036-a549-ac285f0e64f6-kube-api-access-64xgt\") pod \"crc-debug-kxzx7\" (UID: \"63c9d540-08af-4036-a549-ac285f0e64f6\") " pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.299627 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63c9d540-08af-4036-a549-ac285f0e64f6-host\") pod \"crc-debug-kxzx7\" (UID: \"63c9d540-08af-4036-a549-ac285f0e64f6\") 
" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.318500 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64xgt\" (UniqueName: \"kubernetes.io/projected/63c9d540-08af-4036-a549-ac285f0e64f6-kube-api-access-64xgt\") pod \"crc-debug-kxzx7\" (UID: \"63c9d540-08af-4036-a549-ac285f0e64f6\") " pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.434984 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:57:40 crc kubenswrapper[4886]: W1124 09:57:40.469938 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63c9d540_08af_4036_a549_ac285f0e64f6.slice/crio-17e358a3a53d1d8eb9f659a6c64ebd35840770899bcb59eb571abcb9eae3a209 WatchSource:0}: Error finding container 17e358a3a53d1d8eb9f659a6c64ebd35840770899bcb59eb571abcb9eae3a209: Status 404 returned error can't find the container with id 17e358a3a53d1d8eb9f659a6c64ebd35840770899bcb59eb571abcb9eae3a209 Nov 24 09:57:40 crc kubenswrapper[4886]: I1124 09:57:40.614217 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" event={"ID":"63c9d540-08af-4036-a549-ac285f0e64f6","Type":"ContainerStarted","Data":"17e358a3a53d1d8eb9f659a6c64ebd35840770899bcb59eb571abcb9eae3a209"} Nov 24 09:57:41 crc kubenswrapper[4886]: I1124 09:57:41.625584 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" event={"ID":"63c9d540-08af-4036-a549-ac285f0e64f6","Type":"ContainerStarted","Data":"d886205b78e8807bc1eb2f61aa0ffed6fd61c40b95ad657ab5bdf27c2dc95854"} Nov 24 09:57:41 crc kubenswrapper[4886]: I1124 09:57:41.646856 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" 
podStartSLOduration=1.646834796 podStartE2EDuration="1.646834796s" podCreationTimestamp="2025-11-24 09:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:57:41.638082746 +0000 UTC m=+4117.524820871" watchObservedRunningTime="2025-11-24 09:57:41.646834796 +0000 UTC m=+4117.533572921" Nov 24 09:57:42 crc kubenswrapper[4886]: I1124 09:57:42.848803 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:57:42 crc kubenswrapper[4886]: E1124 09:57:42.849611 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:57:56 crc kubenswrapper[4886]: I1124 09:57:56.849479 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:57:56 crc kubenswrapper[4886]: E1124 09:57:56.850583 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 09:58:10 crc kubenswrapper[4886]: I1124 09:58:10.852972 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 09:58:11 crc kubenswrapper[4886]: I1124 09:58:11.929900 4886 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"36b26e4cf83aad3ea3d4859c5d848427409291cf3105e6e7735d206d24bf9ffb"} Nov 24 09:58:19 crc kubenswrapper[4886]: I1124 09:58:19.003892 4886 generic.go:334] "Generic (PLEG): container finished" podID="63c9d540-08af-4036-a549-ac285f0e64f6" containerID="d886205b78e8807bc1eb2f61aa0ffed6fd61c40b95ad657ab5bdf27c2dc95854" exitCode=0 Nov 24 09:58:19 crc kubenswrapper[4886]: I1124 09:58:19.003983 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" event={"ID":"63c9d540-08af-4036-a549-ac285f0e64f6","Type":"ContainerDied","Data":"d886205b78e8807bc1eb2f61aa0ffed6fd61c40b95ad657ab5bdf27c2dc95854"} Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.474668 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.493014 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64xgt\" (UniqueName: \"kubernetes.io/projected/63c9d540-08af-4036-a549-ac285f0e64f6-kube-api-access-64xgt\") pod \"63c9d540-08af-4036-a549-ac285f0e64f6\" (UID: \"63c9d540-08af-4036-a549-ac285f0e64f6\") " Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.493069 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63c9d540-08af-4036-a549-ac285f0e64f6-host\") pod \"63c9d540-08af-4036-a549-ac285f0e64f6\" (UID: \"63c9d540-08af-4036-a549-ac285f0e64f6\") " Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.493195 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c9d540-08af-4036-a549-ac285f0e64f6-host" (OuterVolumeSpecName: "host") pod 
"63c9d540-08af-4036-a549-ac285f0e64f6" (UID: "63c9d540-08af-4036-a549-ac285f0e64f6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.494116 4886 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63c9d540-08af-4036-a549-ac285f0e64f6-host\") on node \"crc\" DevicePath \"\"" Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.509216 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c9d540-08af-4036-a549-ac285f0e64f6-kube-api-access-64xgt" (OuterVolumeSpecName: "kube-api-access-64xgt") pod "63c9d540-08af-4036-a549-ac285f0e64f6" (UID: "63c9d540-08af-4036-a549-ac285f0e64f6"). InnerVolumeSpecName "kube-api-access-64xgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.516597 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-kxzx7"] Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.527986 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-kxzx7"] Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.595735 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64xgt\" (UniqueName: \"kubernetes.io/projected/63c9d540-08af-4036-a549-ac285f0e64f6-kube-api-access-64xgt\") on node \"crc\" DevicePath \"\"" Nov 24 09:58:20 crc kubenswrapper[4886]: I1124 09:58:20.863327 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c9d540-08af-4036-a549-ac285f0e64f6" path="/var/lib/kubelet/pods/63c9d540-08af-4036-a549-ac285f0e64f6/volumes" Nov 24 09:58:21 crc kubenswrapper[4886]: I1124 09:58:21.031358 4886 scope.go:117] "RemoveContainer" containerID="d886205b78e8807bc1eb2f61aa0ffed6fd61c40b95ad657ab5bdf27c2dc95854" Nov 24 09:58:21 crc kubenswrapper[4886]: I1124 09:58:21.031423 4886 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-kxzx7" Nov 24 09:58:21 crc kubenswrapper[4886]: I1124 09:58:21.864132 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-bnpp7"] Nov 24 09:58:21 crc kubenswrapper[4886]: E1124 09:58:21.864555 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c9d540-08af-4036-a549-ac285f0e64f6" containerName="container-00" Nov 24 09:58:21 crc kubenswrapper[4886]: I1124 09:58:21.864570 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c9d540-08af-4036-a549-ac285f0e64f6" containerName="container-00" Nov 24 09:58:21 crc kubenswrapper[4886]: I1124 09:58:21.864761 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c9d540-08af-4036-a549-ac285f0e64f6" containerName="container-00" Nov 24 09:58:21 crc kubenswrapper[4886]: I1124 09:58:21.865461 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:21 crc kubenswrapper[4886]: I1124 09:58:21.922667 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fss\" (UniqueName: \"kubernetes.io/projected/42df918b-ab48-4da1-82ce-661ad1ac2422-kube-api-access-t7fss\") pod \"crc-debug-bnpp7\" (UID: \"42df918b-ab48-4da1-82ce-661ad1ac2422\") " pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:21 crc kubenswrapper[4886]: I1124 09:58:21.922763 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42df918b-ab48-4da1-82ce-661ad1ac2422-host\") pod \"crc-debug-bnpp7\" (UID: \"42df918b-ab48-4da1-82ce-661ad1ac2422\") " pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:22 crc kubenswrapper[4886]: I1124 09:58:22.025973 4886 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-t7fss\" (UniqueName: \"kubernetes.io/projected/42df918b-ab48-4da1-82ce-661ad1ac2422-kube-api-access-t7fss\") pod \"crc-debug-bnpp7\" (UID: \"42df918b-ab48-4da1-82ce-661ad1ac2422\") " pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:22 crc kubenswrapper[4886]: I1124 09:58:22.026040 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42df918b-ab48-4da1-82ce-661ad1ac2422-host\") pod \"crc-debug-bnpp7\" (UID: \"42df918b-ab48-4da1-82ce-661ad1ac2422\") " pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:22 crc kubenswrapper[4886]: I1124 09:58:22.026192 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42df918b-ab48-4da1-82ce-661ad1ac2422-host\") pod \"crc-debug-bnpp7\" (UID: \"42df918b-ab48-4da1-82ce-661ad1ac2422\") " pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:22 crc kubenswrapper[4886]: I1124 09:58:22.054571 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fss\" (UniqueName: \"kubernetes.io/projected/42df918b-ab48-4da1-82ce-661ad1ac2422-kube-api-access-t7fss\") pod \"crc-debug-bnpp7\" (UID: \"42df918b-ab48-4da1-82ce-661ad1ac2422\") " pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:22 crc kubenswrapper[4886]: I1124 09:58:22.185963 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:22 crc kubenswrapper[4886]: W1124 09:58:22.229281 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42df918b_ab48_4da1_82ce_661ad1ac2422.slice/crio-f2ef1cc8579475d4cf80468ef0211e311e78dbe0fda4e82fb3f9edd6bc8b7b86 WatchSource:0}: Error finding container f2ef1cc8579475d4cf80468ef0211e311e78dbe0fda4e82fb3f9edd6bc8b7b86: Status 404 returned error can't find the container with id f2ef1cc8579475d4cf80468ef0211e311e78dbe0fda4e82fb3f9edd6bc8b7b86 Nov 24 09:58:23 crc kubenswrapper[4886]: I1124 09:58:23.069860 4886 generic.go:334] "Generic (PLEG): container finished" podID="42df918b-ab48-4da1-82ce-661ad1ac2422" containerID="1cbc598d14fe0a7bbd8b7318906ee037bea3ed7829bb51fb7d0c1bbcc42393c6" exitCode=0 Nov 24 09:58:23 crc kubenswrapper[4886]: I1124 09:58:23.070274 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" event={"ID":"42df918b-ab48-4da1-82ce-661ad1ac2422","Type":"ContainerDied","Data":"1cbc598d14fe0a7bbd8b7318906ee037bea3ed7829bb51fb7d0c1bbcc42393c6"} Nov 24 09:58:23 crc kubenswrapper[4886]: I1124 09:58:23.070317 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" event={"ID":"42df918b-ab48-4da1-82ce-661ad1ac2422","Type":"ContainerStarted","Data":"f2ef1cc8579475d4cf80468ef0211e311e78dbe0fda4e82fb3f9edd6bc8b7b86"} Nov 24 09:58:23 crc kubenswrapper[4886]: I1124 09:58:23.546029 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-bnpp7"] Nov 24 09:58:23 crc kubenswrapper[4886]: I1124 09:58:23.555512 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-bnpp7"] Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.189384 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.277134 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42df918b-ab48-4da1-82ce-661ad1ac2422-host\") pod \"42df918b-ab48-4da1-82ce-661ad1ac2422\" (UID: \"42df918b-ab48-4da1-82ce-661ad1ac2422\") " Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.277343 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42df918b-ab48-4da1-82ce-661ad1ac2422-host" (OuterVolumeSpecName: "host") pod "42df918b-ab48-4da1-82ce-661ad1ac2422" (UID: "42df918b-ab48-4da1-82ce-661ad1ac2422"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.277462 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7fss\" (UniqueName: \"kubernetes.io/projected/42df918b-ab48-4da1-82ce-661ad1ac2422-kube-api-access-t7fss\") pod \"42df918b-ab48-4da1-82ce-661ad1ac2422\" (UID: \"42df918b-ab48-4da1-82ce-661ad1ac2422\") " Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.278006 4886 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42df918b-ab48-4da1-82ce-661ad1ac2422-host\") on node \"crc\" DevicePath \"\"" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.287461 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42df918b-ab48-4da1-82ce-661ad1ac2422-kube-api-access-t7fss" (OuterVolumeSpecName: "kube-api-access-t7fss") pod "42df918b-ab48-4da1-82ce-661ad1ac2422" (UID: "42df918b-ab48-4da1-82ce-661ad1ac2422"). InnerVolumeSpecName "kube-api-access-t7fss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.380354 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7fss\" (UniqueName: \"kubernetes.io/projected/42df918b-ab48-4da1-82ce-661ad1ac2422-kube-api-access-t7fss\") on node \"crc\" DevicePath \"\"" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.753708 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-qk72m"] Nov 24 09:58:24 crc kubenswrapper[4886]: E1124 09:58:24.754090 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42df918b-ab48-4da1-82ce-661ad1ac2422" containerName="container-00" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.754103 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="42df918b-ab48-4da1-82ce-661ad1ac2422" containerName="container-00" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.754337 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="42df918b-ab48-4da1-82ce-661ad1ac2422" containerName="container-00" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.755008 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.789859 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvrc\" (UniqueName: \"kubernetes.io/projected/2d623dbc-e504-4217-a3f2-876826115a22-kube-api-access-fbvrc\") pod \"crc-debug-qk72m\" (UID: \"2d623dbc-e504-4217-a3f2-876826115a22\") " pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.790359 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d623dbc-e504-4217-a3f2-876826115a22-host\") pod \"crc-debug-qk72m\" (UID: \"2d623dbc-e504-4217-a3f2-876826115a22\") " pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.864195 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42df918b-ab48-4da1-82ce-661ad1ac2422" path="/var/lib/kubelet/pods/42df918b-ab48-4da1-82ce-661ad1ac2422/volumes" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.893802 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbvrc\" (UniqueName: \"kubernetes.io/projected/2d623dbc-e504-4217-a3f2-876826115a22-kube-api-access-fbvrc\") pod \"crc-debug-qk72m\" (UID: \"2d623dbc-e504-4217-a3f2-876826115a22\") " pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.893974 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d623dbc-e504-4217-a3f2-876826115a22-host\") pod \"crc-debug-qk72m\" (UID: \"2d623dbc-e504-4217-a3f2-876826115a22\") " pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.896471 4886 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d623dbc-e504-4217-a3f2-876826115a22-host\") pod \"crc-debug-qk72m\" (UID: \"2d623dbc-e504-4217-a3f2-876826115a22\") " pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:24 crc kubenswrapper[4886]: I1124 09:58:24.927029 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbvrc\" (UniqueName: \"kubernetes.io/projected/2d623dbc-e504-4217-a3f2-876826115a22-kube-api-access-fbvrc\") pod \"crc-debug-qk72m\" (UID: \"2d623dbc-e504-4217-a3f2-876826115a22\") " pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:25 crc kubenswrapper[4886]: I1124 09:58:25.074765 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:25 crc kubenswrapper[4886]: I1124 09:58:25.098968 4886 scope.go:117] "RemoveContainer" containerID="1cbc598d14fe0a7bbd8b7318906ee037bea3ed7829bb51fb7d0c1bbcc42393c6" Nov 24 09:58:25 crc kubenswrapper[4886]: I1124 09:58:25.099028 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-bnpp7" Nov 24 09:58:25 crc kubenswrapper[4886]: W1124 09:58:25.126073 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d623dbc_e504_4217_a3f2_876826115a22.slice/crio-807e6048be2ddd7ada6ae393bd233fdc7449172b972abbbbe2dea16a0ed48b0c WatchSource:0}: Error finding container 807e6048be2ddd7ada6ae393bd233fdc7449172b972abbbbe2dea16a0ed48b0c: Status 404 returned error can't find the container with id 807e6048be2ddd7ada6ae393bd233fdc7449172b972abbbbe2dea16a0ed48b0c Nov 24 09:58:26 crc kubenswrapper[4886]: I1124 09:58:26.110044 4886 generic.go:334] "Generic (PLEG): container finished" podID="2d623dbc-e504-4217-a3f2-876826115a22" containerID="0c0fe26545f744da3ab1bc782f7db113f2bb5c64663fb8e3f0b93fbc2629579e" exitCode=0 Nov 24 09:58:26 crc kubenswrapper[4886]: I1124 09:58:26.110108 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/crc-debug-qk72m" event={"ID":"2d623dbc-e504-4217-a3f2-876826115a22","Type":"ContainerDied","Data":"0c0fe26545f744da3ab1bc782f7db113f2bb5c64663fb8e3f0b93fbc2629579e"} Nov 24 09:58:26 crc kubenswrapper[4886]: I1124 09:58:26.110517 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/crc-debug-qk72m" event={"ID":"2d623dbc-e504-4217-a3f2-876826115a22","Type":"ContainerStarted","Data":"807e6048be2ddd7ada6ae393bd233fdc7449172b972abbbbe2dea16a0ed48b0c"} Nov 24 09:58:26 crc kubenswrapper[4886]: I1124 09:58:26.148932 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-qk72m"] Nov 24 09:58:26 crc kubenswrapper[4886]: I1124 09:58:26.156282 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-w5rqf/crc-debug-qk72m"] Nov 24 09:58:27 crc kubenswrapper[4886]: I1124 09:58:27.237709 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:27 crc kubenswrapper[4886]: I1124 09:58:27.340258 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d623dbc-e504-4217-a3f2-876826115a22-host\") pod \"2d623dbc-e504-4217-a3f2-876826115a22\" (UID: \"2d623dbc-e504-4217-a3f2-876826115a22\") " Nov 24 09:58:27 crc kubenswrapper[4886]: I1124 09:58:27.340316 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbvrc\" (UniqueName: \"kubernetes.io/projected/2d623dbc-e504-4217-a3f2-876826115a22-kube-api-access-fbvrc\") pod \"2d623dbc-e504-4217-a3f2-876826115a22\" (UID: \"2d623dbc-e504-4217-a3f2-876826115a22\") " Nov 24 09:58:27 crc kubenswrapper[4886]: I1124 09:58:27.340398 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d623dbc-e504-4217-a3f2-876826115a22-host" (OuterVolumeSpecName: "host") pod "2d623dbc-e504-4217-a3f2-876826115a22" (UID: "2d623dbc-e504-4217-a3f2-876826115a22"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:58:27 crc kubenswrapper[4886]: I1124 09:58:27.341184 4886 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d623dbc-e504-4217-a3f2-876826115a22-host\") on node \"crc\" DevicePath \"\"" Nov 24 09:58:27 crc kubenswrapper[4886]: I1124 09:58:27.351635 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d623dbc-e504-4217-a3f2-876826115a22-kube-api-access-fbvrc" (OuterVolumeSpecName: "kube-api-access-fbvrc") pod "2d623dbc-e504-4217-a3f2-876826115a22" (UID: "2d623dbc-e504-4217-a3f2-876826115a22"). InnerVolumeSpecName "kube-api-access-fbvrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:58:27 crc kubenswrapper[4886]: I1124 09:58:27.444627 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbvrc\" (UniqueName: \"kubernetes.io/projected/2d623dbc-e504-4217-a3f2-876826115a22-kube-api-access-fbvrc\") on node \"crc\" DevicePath \"\"" Nov 24 09:58:28 crc kubenswrapper[4886]: I1124 09:58:28.133451 4886 scope.go:117] "RemoveContainer" containerID="0c0fe26545f744da3ab1bc782f7db113f2bb5c64663fb8e3f0b93fbc2629579e" Nov 24 09:58:28 crc kubenswrapper[4886]: I1124 09:58:28.133934 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/crc-debug-qk72m" Nov 24 09:58:28 crc kubenswrapper[4886]: I1124 09:58:28.861025 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d623dbc-e504-4217-a3f2-876826115a22" path="/var/lib/kubelet/pods/2d623dbc-e504-4217-a3f2-876826115a22/volumes" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.301480 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76455fdd78-8k9rz_5555aeec-470e-473c-ad74-de78791861dc/barbican-api/0.log" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.375612 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bd8cw"] Nov 24 09:58:54 crc kubenswrapper[4886]: E1124 09:58:54.376176 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d623dbc-e504-4217-a3f2-876826115a22" containerName="container-00" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.376200 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d623dbc-e504-4217-a3f2-876826115a22" containerName="container-00" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.376444 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d623dbc-e504-4217-a3f2-876826115a22" containerName="container-00" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 
09:58:54.378215 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.397459 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd8cw"] Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.467712 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-76455fdd78-8k9rz_5555aeec-470e-473c-ad74-de78791861dc/barbican-api-log/0.log" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.525795 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-utilities\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.526475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-catalog-content\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.526577 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswtj\" (UniqueName: \"kubernetes.io/projected/2d53400e-5071-4c4a-a561-801e95f7b79b-kube-api-access-fswtj\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.573852 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xt2dl"] Nov 24 09:58:54 crc 
kubenswrapper[4886]: I1124 09:58:54.576369 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.603197 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xt2dl"] Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.610338 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8b4cf4966-gt5q7_cf27d89f-7c4b-49b5-a993-b851f86a2994/barbican-keystone-listener/0.log" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.629258 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-utilities\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.629372 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-catalog-content\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.629419 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswtj\" (UniqueName: \"kubernetes.io/projected/2d53400e-5071-4c4a-a561-801e95f7b79b-kube-api-access-fswtj\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.630536 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-utilities\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.632950 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-catalog-content\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.670688 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8b4cf4966-gt5q7_cf27d89f-7c4b-49b5-a993-b851f86a2994/barbican-keystone-listener-log/0.log" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.674348 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswtj\" (UniqueName: \"kubernetes.io/projected/2d53400e-5071-4c4a-a561-801e95f7b79b-kube-api-access-fswtj\") pod \"community-operators-bd8cw\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.722185 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.732366 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-catalog-content\") pod \"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.732475 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-utilities\") pod \"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.732542 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttjqx\" (UniqueName: \"kubernetes.io/projected/9bd2fee0-8a88-4571-9c40-7585198593e1-kube-api-access-ttjqx\") pod \"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.834594 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttjqx\" (UniqueName: \"kubernetes.io/projected/9bd2fee0-8a88-4571-9c40-7585198593e1-kube-api-access-ttjqx\") pod \"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.834805 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-catalog-content\") pod 
\"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.834899 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-utilities\") pod \"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.835360 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-utilities\") pod \"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.835369 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-catalog-content\") pod \"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.862061 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttjqx\" (UniqueName: \"kubernetes.io/projected/9bd2fee0-8a88-4571-9c40-7585198593e1-kube-api-access-ttjqx\") pod \"certified-operators-xt2dl\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:54 crc kubenswrapper[4886]: I1124 09:58:54.901081 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.282742 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-df69f5cf-v8lvl_903a1b7e-92e3-455b-af86-c46c9a290f11/barbican-worker-log/0.log" Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.303947 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-df69f5cf-v8lvl_903a1b7e-92e3-455b-af86-c46c9a290f11/barbican-worker/0.log" Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.404185 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd8cw"] Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.445046 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd8cw" event={"ID":"2d53400e-5071-4c4a-a561-801e95f7b79b","Type":"ContainerStarted","Data":"4e69f409e67bbf8230a106abc54b525819ac552f44321cf4c5301381a48dead8"} Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.627959 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xt2dl"] Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.640992 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-k624c_e26edc4e-16ec-494e-9011-1dcaf51099be/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.887484 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c1ffc60-4954-4d55-800e-00cb24c6cfa4/ceilometer-central-agent/0.log" Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.954227 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c1ffc60-4954-4d55-800e-00cb24c6cfa4/ceilometer-notification-agent/0.log" Nov 24 09:58:55 crc kubenswrapper[4886]: I1124 09:58:55.974172 4886 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c1ffc60-4954-4d55-800e-00cb24c6cfa4/proxy-httpd/0.log" Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.138928 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c1ffc60-4954-4d55-800e-00cb24c6cfa4/sg-core/0.log" Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.211042 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_22fb5c5f-d94b-4069-bef0-62e95c42e89e/cinder-api/0.log" Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.229732 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_22fb5c5f-d94b-4069-bef0-62e95c42e89e/cinder-api-log/0.log" Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.460252 4886 generic.go:334] "Generic (PLEG): container finished" podID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerID="86b074a9968cb0582773c51615ca45957ad91294c18f0aaed2d2010445d55962" exitCode=0 Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.460401 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xt2dl" event={"ID":"9bd2fee0-8a88-4571-9c40-7585198593e1","Type":"ContainerDied","Data":"86b074a9968cb0582773c51615ca45957ad91294c18f0aaed2d2010445d55962"} Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.460613 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xt2dl" event={"ID":"9bd2fee0-8a88-4571-9c40-7585198593e1","Type":"ContainerStarted","Data":"33938b4f718e26b89ab349160cf3743aa9e36df41da0414d090493c74ce0586e"} Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.463596 4886 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.465473 4886 generic.go:334] "Generic (PLEG): container finished" podID="2d53400e-5071-4c4a-a561-801e95f7b79b" 
containerID="cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062" exitCode=0 Nov 24 09:58:56 crc kubenswrapper[4886]: I1124 09:58:56.465518 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd8cw" event={"ID":"2d53400e-5071-4c4a-a561-801e95f7b79b","Type":"ContainerDied","Data":"cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062"} Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.098685 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_20a1d599-cfce-400c-a6d9-9a060ebe4b8e/cinder-scheduler/0.log" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.204730 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_20a1d599-cfce-400c-a6d9-9a060ebe4b8e/probe/0.log" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.368181 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-w42vq_352e856d-6e0d-4aba-b2ce-8063ed40a041/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.406975 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-qc5h5_f7b875a5-9e9f-43bc-b6da-48223ea2c653/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.493525 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xt2dl" event={"ID":"9bd2fee0-8a88-4571-9c40-7585198593e1","Type":"ContainerStarted","Data":"5990621bd3c0c02c4c431f5aff0ddf31a7c2465e3fb5ec2743731b2c6f0ca9ae"} Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.651291 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-bw54t_b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06/init/0.log" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 
09:58:57.772946 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4vggj"] Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.775524 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.788862 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vggj"] Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.833714 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-bw54t_b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06/init/0.log" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.932489 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-catalog-content\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.932927 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db78r\" (UniqueName: \"kubernetes.io/projected/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-kube-api-access-db78r\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.933031 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-utilities\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 
09:58:57.962481 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-bw54t_b6b3b0d2-a417-4be6-a6b0-8b30f1a25a06/dnsmasq-dns/0.log" Nov 24 09:58:57 crc kubenswrapper[4886]: I1124 09:58:57.992124 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-h68td_5efd6888-9bb2-49d6-ba3b-b9e30c87e5a1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.034713 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-catalog-content\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.034787 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db78r\" (UniqueName: \"kubernetes.io/projected/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-kube-api-access-db78r\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.034893 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-utilities\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.035494 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-utilities\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " 
pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.035523 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-catalog-content\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.064921 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db78r\" (UniqueName: \"kubernetes.io/projected/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-kube-api-access-db78r\") pod \"redhat-marketplace-4vggj\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.100513 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.399545 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f223fa66-cb1a-4f97-970b-9c64793d34b9/glance-httpd/0.log" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.411801 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f223fa66-cb1a-4f97-970b-9c64793d34b9/glance-log/0.log" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.509650 4886 generic.go:334] "Generic (PLEG): container finished" podID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerID="5990621bd3c0c02c4c431f5aff0ddf31a7c2465e3fb5ec2743731b2c6f0ca9ae" exitCode=0 Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.509734 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xt2dl" 
event={"ID":"9bd2fee0-8a88-4571-9c40-7585198593e1","Type":"ContainerDied","Data":"5990621bd3c0c02c4c431f5aff0ddf31a7c2465e3fb5ec2743731b2c6f0ca9ae"} Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.516461 4886 generic.go:334] "Generic (PLEG): container finished" podID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerID="d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e" exitCode=0 Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.516536 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd8cw" event={"ID":"2d53400e-5071-4c4a-a561-801e95f7b79b","Type":"ContainerDied","Data":"d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e"} Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.610957 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404/glance-httpd/0.log" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.631395 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c2a34763-2dcf-4e0d-a6b6-7e26dc1f0404/glance-log/0.log" Nov 24 09:58:58 crc kubenswrapper[4886]: I1124 09:58:58.682544 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vggj"] Nov 24 09:58:59 crc kubenswrapper[4886]: W1124 09:58:59.147999 4886 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fa23b5_90f7_40a4_84b9_7dd6295d23ca.slice/crio-6068d6ad7c21e81e8d26e37dfe7b650db11873d48affe06e603052d57c14c8ff WatchSource:0}: Error finding container 6068d6ad7c21e81e8d26e37dfe7b650db11873d48affe06e603052d57c14c8ff: Status 404 returned error can't find the container with id 6068d6ad7c21e81e8d26e37dfe7b650db11873d48affe06e603052d57c14c8ff Nov 24 09:58:59 crc kubenswrapper[4886]: I1124 09:58:59.395321 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-twcff_06314c58-da5f-46e4-ac6d-63f95ca6a6f9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:58:59 crc kubenswrapper[4886]: I1124 09:58:59.429011 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-664f9d77dd-zw4gm_19e275c2-5fd6-4ea7-a023-6d7478ae5750/horizon/0.log" Nov 24 09:58:59 crc kubenswrapper[4886]: I1124 09:58:59.533087 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vggj" event={"ID":"48fa23b5-90f7-40a4-84b9-7dd6295d23ca","Type":"ContainerStarted","Data":"6068d6ad7c21e81e8d26e37dfe7b650db11873d48affe06e603052d57c14c8ff"} Nov 24 09:58:59 crc kubenswrapper[4886]: I1124 09:58:59.723749 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4wj4b_06b5ba9e-eb74-4f82-ba9e-2fe84eb7d255/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:58:59 crc kubenswrapper[4886]: I1124 09:58:59.757520 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-664f9d77dd-zw4gm_19e275c2-5fd6-4ea7-a023-6d7478ae5750/horizon-log/0.log" Nov 24 09:58:59 crc kubenswrapper[4886]: I1124 09:58:59.951237 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-78ff5b5cf5-swx4n_ba9d3f7a-c442-4fac-bc1f-4863e157b084/keystone-api/0.log" Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.028536 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_39f8779a-9800-4658-aa0a-8603669d7fbe/kube-state-metrics/0.log" Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.199371 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k47md_ce68d69b-17a7-483e-be9c-5a39b0e2dee8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.567076 4886 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xt2dl" event={"ID":"9bd2fee0-8a88-4571-9c40-7585198593e1","Type":"ContainerStarted","Data":"2361f447ecb096b909c51240a5aecc3771a828d4d620142b7c31509194e17cce"} Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.570578 4886 generic.go:334] "Generic (PLEG): container finished" podID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerID="87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5" exitCode=0 Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.570659 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vggj" event={"ID":"48fa23b5-90f7-40a4-84b9-7dd6295d23ca","Type":"ContainerDied","Data":"87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5"} Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.585350 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd8cw" event={"ID":"2d53400e-5071-4c4a-a561-801e95f7b79b","Type":"ContainerStarted","Data":"dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531"} Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.638989 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xt2dl" podStartSLOduration=3.896777458 podStartE2EDuration="6.638840566s" podCreationTimestamp="2025-11-24 09:58:54 +0000 UTC" firstStartedPulling="2025-11-24 09:58:56.463309021 +0000 UTC m=+4192.350047156" lastFinishedPulling="2025-11-24 09:58:59.205372129 +0000 UTC m=+4195.092110264" observedRunningTime="2025-11-24 09:59:00.59348526 +0000 UTC m=+4196.480223405" watchObservedRunningTime="2025-11-24 09:59:00.638840566 +0000 UTC m=+4196.525578721" Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.650939 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bd8cw" 
podStartSLOduration=3.905265881 podStartE2EDuration="6.650920931s" podCreationTimestamp="2025-11-24 09:58:54 +0000 UTC" firstStartedPulling="2025-11-24 09:58:56.468348595 +0000 UTC m=+4192.355086730" lastFinishedPulling="2025-11-24 09:58:59.214003645 +0000 UTC m=+4195.100741780" observedRunningTime="2025-11-24 09:59:00.614900692 +0000 UTC m=+4196.501638827" watchObservedRunningTime="2025-11-24 09:59:00.650920931 +0000 UTC m=+4196.537659066" Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.726744 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58775dd67f-bvv4s_f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b/neutron-httpd/0.log" Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.762201 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58775dd67f-bvv4s_f7e8a3f6-7c28-4b33-a355-1c7f38b2cc7b/neutron-api/0.log" Nov 24 09:59:00 crc kubenswrapper[4886]: I1124 09:59:00.903293 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-n778s_ad4158ea-36b4-499a-bfb0-d6743c87340a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:01 crc kubenswrapper[4886]: I1124 09:59:01.661364 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f31eee06-9a4d-4956-b314-b4413ac5aba0/nova-api-log/0.log" Nov 24 09:59:01 crc kubenswrapper[4886]: I1124 09:59:01.792561 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2f98453e-9a49-498a-bcc6-6a4d82f39fc7/nova-cell0-conductor-conductor/0.log" Nov 24 09:59:02 crc kubenswrapper[4886]: I1124 09:59:02.180300 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ec788e35-0154-4b74-86b4-5a21037b3e4a/nova-cell1-conductor-conductor/0.log" Nov 24 09:59:02 crc kubenswrapper[4886]: I1124 09:59:02.247226 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ffad73ea-b29e-4f4c-aa2c-ad30a864a8b4/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 09:59:02 crc kubenswrapper[4886]: I1124 09:59:02.287099 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f31eee06-9a4d-4956-b314-b4413ac5aba0/nova-api-api/0.log" Nov 24 09:59:02 crc kubenswrapper[4886]: I1124 09:59:02.501537 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-f2b4z_36804e58-9c67-454c-a7b2-6aca006eb481/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:02 crc kubenswrapper[4886]: I1124 09:59:02.777006 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6d1021e4-f165-4881-9bcc-2cc19416ab64/nova-metadata-log/0.log" Nov 24 09:59:02 crc kubenswrapper[4886]: I1124 09:59:02.791438 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vggj" event={"ID":"48fa23b5-90f7-40a4-84b9-7dd6295d23ca","Type":"ContainerStarted","Data":"f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23"} Nov 24 09:59:03 crc kubenswrapper[4886]: I1124 09:59:03.332492 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_11ea3612-3583-4b82-9047-d11cd751adcd/mysql-bootstrap/0.log" Nov 24 09:59:03 crc kubenswrapper[4886]: I1124 09:59:03.518508 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_11ea3612-3583-4b82-9047-d11cd751adcd/mysql-bootstrap/0.log" Nov 24 09:59:03 crc kubenswrapper[4886]: I1124 09:59:03.585373 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_11ea3612-3583-4b82-9047-d11cd751adcd/galera/0.log" Nov 24 09:59:03 crc kubenswrapper[4886]: I1124 09:59:03.621017 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_8288e829-a6d4-4f11-abf2-e9cd50df6c4b/nova-scheduler-scheduler/0.log" Nov 24 09:59:03 crc kubenswrapper[4886]: I1124 09:59:03.806845 4886 generic.go:334] "Generic (PLEG): container finished" podID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerID="f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23" exitCode=0 Nov 24 09:59:03 crc kubenswrapper[4886]: I1124 09:59:03.806892 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vggj" event={"ID":"48fa23b5-90f7-40a4-84b9-7dd6295d23ca","Type":"ContainerDied","Data":"f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23"} Nov 24 09:59:03 crc kubenswrapper[4886]: I1124 09:59:03.858142 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3ec7bf38-594d-4606-ab2c-76f4fc8b6a29/mysql-bootstrap/0.log" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.032922 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3ec7bf38-594d-4606-ab2c-76f4fc8b6a29/mysql-bootstrap/0.log" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.114134 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3ec7bf38-594d-4606-ab2c-76f4fc8b6a29/galera/0.log" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.333893 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_801740d3-12c4-4576-a79d-186b36e3f079/openstackclient/0.log" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.403009 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hqh7m_abe55c7e-0682-4591-bd60-59ee1de24094/openstack-network-exporter/0.log" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.701912 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-vclvw_6111b0c6-fec0-4738-8a86-e433a2b5c673/ovsdb-server-init/0.log" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.723245 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.723320 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.778630 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6d1021e4-f165-4881-9bcc-2cc19416ab64/nova-metadata-metadata/0.log" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.798585 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.834810 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vggj" event={"ID":"48fa23b5-90f7-40a4-84b9-7dd6295d23ca","Type":"ContainerStarted","Data":"df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20"} Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.900699 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.902094 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.902165 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.907565 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vggj" 
podStartSLOduration=4.273443394 podStartE2EDuration="7.907537163s" podCreationTimestamp="2025-11-24 09:58:57 +0000 UTC" firstStartedPulling="2025-11-24 09:59:00.572987455 +0000 UTC m=+4196.459725590" lastFinishedPulling="2025-11-24 09:59:04.207081234 +0000 UTC m=+4200.093819359" observedRunningTime="2025-11-24 09:59:04.872018748 +0000 UTC m=+4200.758756883" watchObservedRunningTime="2025-11-24 09:59:04.907537163 +0000 UTC m=+4200.794275298" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.946104 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vclvw_6111b0c6-fec0-4738-8a86-e433a2b5c673/ovsdb-server-init/0.log" Nov 24 09:59:04 crc kubenswrapper[4886]: I1124 09:59:04.984570 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.046757 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vclvw_6111b0c6-fec0-4738-8a86-e433a2b5c673/ovsdb-server/0.log" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.061029 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vclvw_6111b0c6-fec0-4738-8a86-e433a2b5c673/ovs-vswitchd/0.log" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.324240 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rzmth_b7951685-e0e7-4524-ba49-b720357aa59c/ovn-controller/0.log" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.457557 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mqb42_cde6df39-d639-4855-a34f-29ff9af5c870/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.634312 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_28013454-2b4a-4d68-87fa-272095c8a651/openstack-network-exporter/0.log" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.682966 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_28013454-2b4a-4d68-87fa-272095c8a651/ovn-northd/0.log" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.907176 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1/ovsdbserver-nb/0.log" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.919032 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:59:05 crc kubenswrapper[4886]: I1124 09:59:05.931935 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_43ae2d6a-6a35-4cf5-81fd-76c9a41c6db1/openstack-network-exporter/0.log" Nov 24 09:59:06 crc kubenswrapper[4886]: I1124 09:59:06.159890 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_495262a2-0785-4f84-aeb5-00eff9c76e9a/openstack-network-exporter/0.log" Nov 24 09:59:06 crc kubenswrapper[4886]: I1124 09:59:06.229134 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_495262a2-0785-4f84-aeb5-00eff9c76e9a/ovsdbserver-sb/0.log" Nov 24 09:59:06 crc kubenswrapper[4886]: I1124 09:59:06.413487 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-646878466-vzd4z_98af9edc-5cf6-4dd9-93e0-2e320d0d0939/placement-api/0.log" Nov 24 09:59:06 crc kubenswrapper[4886]: I1124 09:59:06.560637 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-646878466-vzd4z_98af9edc-5cf6-4dd9-93e0-2e320d0d0939/placement-log/0.log" Nov 24 09:59:06 crc kubenswrapper[4886]: I1124 09:59:06.606868 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_533cb212-964b-4427-ac3f-ebafca6d8787/setup-container/0.log" Nov 24 09:59:06 crc kubenswrapper[4886]: I1124 09:59:06.896490 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_533cb212-964b-4427-ac3f-ebafca6d8787/setup-container/0.log" Nov 24 09:59:06 crc kubenswrapper[4886]: I1124 09:59:06.918358 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f14f0ef7-768e-4fc8-a2d1-b852fe44d773/setup-container/0.log" Nov 24 09:59:06 crc kubenswrapper[4886]: I1124 09:59:06.930477 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_533cb212-964b-4427-ac3f-ebafca6d8787/rabbitmq/0.log" Nov 24 09:59:07 crc kubenswrapper[4886]: I1124 09:59:07.213919 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f14f0ef7-768e-4fc8-a2d1-b852fe44d773/setup-container/0.log" Nov 24 09:59:07 crc kubenswrapper[4886]: I1124 09:59:07.228883 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f14f0ef7-768e-4fc8-a2d1-b852fe44d773/rabbitmq/0.log" Nov 24 09:59:07 crc kubenswrapper[4886]: I1124 09:59:07.265002 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7mg9l_cc0c00e3-1e23-4800-9a47-8d86397ba6f3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:07 crc kubenswrapper[4886]: I1124 09:59:07.473433 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-g9sf7_21022c6d-8637-4952-b0c1-33b80b316a3a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:07 crc kubenswrapper[4886]: I1124 09:59:07.522043 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4z2nz_b715926a-c856-44c7-b863-95bd080cbe24/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:07 crc kubenswrapper[4886]: I1124 09:59:07.691506 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-m2f4l_80cccca8-e8d6-4772-b514-83482acf917e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:07 crc kubenswrapper[4886]: I1124 09:59:07.787019 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-r4dj2_c9133cae-660e-41cc-ad42-4b3772bdcdfe/ssh-known-hosts-edpm-deployment/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.100695 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.100837 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.150045 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-558564f98c-jl2ms_c1f11d5d-8b31-47b7-9ceb-197d5ca23475/proxy-server/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.153173 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.163848 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd8cw"] Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.164216 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bd8cw" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerName="registry-server" 
containerID="cri-o://dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531" gracePeriod=2 Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.280083 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dxnk5_c9a54508-7f70-4e5d-952a-587f8fabeb1c/swift-ring-rebalance/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.313755 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-558564f98c-jl2ms_c1f11d5d-8b31-47b7-9ceb-197d5ca23475/proxy-httpd/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.433985 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/account-auditor/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.538820 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/account-reaper/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.596917 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/account-replicator/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.708216 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.730129 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/container-auditor/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.797784 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/account-server/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.799309 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xt2dl"] Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.799568 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xt2dl" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerName="registry-server" containerID="cri-o://2361f447ecb096b909c51240a5aecc3771a828d4d620142b7c31509194e17cce" gracePeriod=2 Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.841846 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-catalog-content\") pod \"2d53400e-5071-4c4a-a561-801e95f7b79b\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.842004 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fswtj\" (UniqueName: \"kubernetes.io/projected/2d53400e-5071-4c4a-a561-801e95f7b79b-kube-api-access-fswtj\") pod \"2d53400e-5071-4c4a-a561-801e95f7b79b\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.842075 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-utilities\") pod \"2d53400e-5071-4c4a-a561-801e95f7b79b\" (UID: \"2d53400e-5071-4c4a-a561-801e95f7b79b\") " Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.843712 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-utilities" (OuterVolumeSpecName: "utilities") pod "2d53400e-5071-4c4a-a561-801e95f7b79b" (UID: "2d53400e-5071-4c4a-a561-801e95f7b79b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.848818 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d53400e-5071-4c4a-a561-801e95f7b79b-kube-api-access-fswtj" (OuterVolumeSpecName: "kube-api-access-fswtj") pod "2d53400e-5071-4c4a-a561-801e95f7b79b" (UID: "2d53400e-5071-4c4a-a561-801e95f7b79b"). InnerVolumeSpecName "kube-api-access-fswtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.895484 4886 generic.go:334] "Generic (PLEG): container finished" podID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerID="dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531" exitCode=0 Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.899444 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bd8cw" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.903238 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/container-server/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.913620 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd8cw" event={"ID":"2d53400e-5071-4c4a-a561-801e95f7b79b","Type":"ContainerDied","Data":"dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531"} Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.913672 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd8cw" event={"ID":"2d53400e-5071-4c4a-a561-801e95f7b79b","Type":"ContainerDied","Data":"4e69f409e67bbf8230a106abc54b525819ac552f44321cf4c5301381a48dead8"} Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.913701 4886 scope.go:117] "RemoveContainer" containerID="dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.922100 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/container-replicator/0.log" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.940940 4886 scope.go:117] "RemoveContainer" containerID="d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.944558 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fswtj\" (UniqueName: \"kubernetes.io/projected/2d53400e-5071-4c4a-a561-801e95f7b79b-kube-api-access-fswtj\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.944591 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:08 crc kubenswrapper[4886]: I1124 09:59:08.988821 4886 scope.go:117] "RemoveContainer" containerID="cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.035895 4886 scope.go:117] "RemoveContainer" containerID="dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531" Nov 24 09:59:09 crc kubenswrapper[4886]: E1124 09:59:09.036381 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531\": container with ID starting with dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531 not found: ID does not exist" containerID="dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.036418 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531"} err="failed to get container status \"dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531\": rpc error: code = NotFound desc = could not find container \"dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531\": container with ID starting with dbf0edc16020930c7823d84ce08469704c6e25227c28b919793ed598a2694531 not found: ID does not exist" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.036439 4886 scope.go:117] "RemoveContainer" containerID="d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e" Nov 24 09:59:09 crc kubenswrapper[4886]: E1124 09:59:09.037351 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e\": container with ID starting with 
d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e not found: ID does not exist" containerID="d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.037405 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e"} err="failed to get container status \"d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e\": rpc error: code = NotFound desc = could not find container \"d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e\": container with ID starting with d110618e21cff74c37128c52bc62b72ac2277c7db0498a97f9a7d1ad694b1e9e not found: ID does not exist" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.037438 4886 scope.go:117] "RemoveContainer" containerID="cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062" Nov 24 09:59:09 crc kubenswrapper[4886]: E1124 09:59:09.037789 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062\": container with ID starting with cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062 not found: ID does not exist" containerID="cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.037827 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062"} err="failed to get container status \"cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062\": rpc error: code = NotFound desc = could not find container \"cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062\": container with ID starting with cabf29518094cf60cddeb11e12d0267bcc84e1ad8f9164009cef5a3b76ec6062 not found: ID does not 
exist" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.038358 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/container-updater/0.log" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.115309 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-auditor/0.log" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.123085 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-expirer/0.log" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.222976 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-replicator/0.log" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.264509 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-server/0.log" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.332640 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/object-updater/0.log" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.786358 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d53400e-5071-4c4a-a561-801e95f7b79b" (UID: "2d53400e-5071-4c4a-a561-801e95f7b79b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.862593 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/swift-recon-cron/0.log" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.871574 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d53400e-5071-4c4a-a561-801e95f7b79b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.874306 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd8cw"] Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.883983 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_65b7f4e6-3f5e-419b-9761-c0fc78a4632d/rsync/0.log" Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.886576 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bd8cw"] Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.920600 4886 generic.go:334] "Generic (PLEG): container finished" podID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerID="2361f447ecb096b909c51240a5aecc3771a828d4d620142b7c31509194e17cce" exitCode=0 Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.920685 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xt2dl" event={"ID":"9bd2fee0-8a88-4571-9c40-7585198593e1","Type":"ContainerDied","Data":"2361f447ecb096b909c51240a5aecc3771a828d4d620142b7c31509194e17cce"} Nov 24 09:59:09 crc kubenswrapper[4886]: I1124 09:59:09.987609 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.017627 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5f9t4_ecc70fd6-3ec9-4488-89c6-2a9a6d803bcb/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.109565 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_347e0b0b-6caf-4b65-8fd5-a8cf2c61acc7/tempest-tests-tempest-tests-runner/0.log" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.257378 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_2e959b23-6fa2-4d10-b235-5fdc8c476ff9/test-operator-logs-container/0.log" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.399096 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-q6b2v_23e016b0-6143-48d5-85e3-fad3392b2de4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.559764 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vggj"] Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.644681 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.792617 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-catalog-content\") pod \"9bd2fee0-8a88-4571-9c40-7585198593e1\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.792858 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-utilities\") pod \"9bd2fee0-8a88-4571-9c40-7585198593e1\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.793070 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttjqx\" (UniqueName: \"kubernetes.io/projected/9bd2fee0-8a88-4571-9c40-7585198593e1-kube-api-access-ttjqx\") pod \"9bd2fee0-8a88-4571-9c40-7585198593e1\" (UID: \"9bd2fee0-8a88-4571-9c40-7585198593e1\") " Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.797014 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-utilities" (OuterVolumeSpecName: "utilities") pod "9bd2fee0-8a88-4571-9c40-7585198593e1" (UID: "9bd2fee0-8a88-4571-9c40-7585198593e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.815622 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd2fee0-8a88-4571-9c40-7585198593e1-kube-api-access-ttjqx" (OuterVolumeSpecName: "kube-api-access-ttjqx") pod "9bd2fee0-8a88-4571-9c40-7585198593e1" (UID: "9bd2fee0-8a88-4571-9c40-7585198593e1"). InnerVolumeSpecName "kube-api-access-ttjqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.867493 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bd2fee0-8a88-4571-9c40-7585198593e1" (UID: "9bd2fee0-8a88-4571-9c40-7585198593e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.871949 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" path="/var/lib/kubelet/pods/2d53400e-5071-4c4a-a561-801e95f7b79b/volumes" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.896977 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttjqx\" (UniqueName: \"kubernetes.io/projected/9bd2fee0-8a88-4571-9c40-7585198593e1-kube-api-access-ttjqx\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.897012 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.897029 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd2fee0-8a88-4571-9c40-7585198593e1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.940300 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xt2dl" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.941572 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xt2dl" event={"ID":"9bd2fee0-8a88-4571-9c40-7585198593e1","Type":"ContainerDied","Data":"33938b4f718e26b89ab349160cf3743aa9e36df41da0414d090493c74ce0586e"} Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.941640 4886 scope.go:117] "RemoveContainer" containerID="2361f447ecb096b909c51240a5aecc3771a828d4d620142b7c31509194e17cce" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.971569 4886 scope.go:117] "RemoveContainer" containerID="5990621bd3c0c02c4c431f5aff0ddf31a7c2465e3fb5ec2743731b2c6f0ca9ae" Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.980301 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xt2dl"] Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.988775 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xt2dl"] Nov 24 09:59:10 crc kubenswrapper[4886]: I1124 09:59:10.992093 4886 scope.go:117] "RemoveContainer" containerID="86b074a9968cb0582773c51615ca45957ad91294c18f0aaed2d2010445d55962" Nov 24 09:59:11 crc kubenswrapper[4886]: I1124 09:59:11.957497 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4vggj" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerName="registry-server" containerID="cri-o://df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20" gracePeriod=2 Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.509762 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.632635 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-catalog-content\") pod \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.648335 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-utilities\") pod \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.648559 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db78r\" (UniqueName: \"kubernetes.io/projected/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-kube-api-access-db78r\") pod \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\" (UID: \"48fa23b5-90f7-40a4-84b9-7dd6295d23ca\") " Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.650575 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-utilities" (OuterVolumeSpecName: "utilities") pod "48fa23b5-90f7-40a4-84b9-7dd6295d23ca" (UID: "48fa23b5-90f7-40a4-84b9-7dd6295d23ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.653631 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48fa23b5-90f7-40a4-84b9-7dd6295d23ca" (UID: "48fa23b5-90f7-40a4-84b9-7dd6295d23ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.669030 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-kube-api-access-db78r" (OuterVolumeSpecName: "kube-api-access-db78r") pod "48fa23b5-90f7-40a4-84b9-7dd6295d23ca" (UID: "48fa23b5-90f7-40a4-84b9-7dd6295d23ca"). InnerVolumeSpecName "kube-api-access-db78r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.751535 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.751577 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.751590 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db78r\" (UniqueName: \"kubernetes.io/projected/48fa23b5-90f7-40a4-84b9-7dd6295d23ca-kube-api-access-db78r\") on node \"crc\" DevicePath \"\"" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.864736 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" path="/var/lib/kubelet/pods/9bd2fee0-8a88-4571-9c40-7585198593e1/volumes" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.968816 4886 generic.go:334] "Generic (PLEG): container finished" podID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerID="df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20" exitCode=0 Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.968880 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4vggj" event={"ID":"48fa23b5-90f7-40a4-84b9-7dd6295d23ca","Type":"ContainerDied","Data":"df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20"} Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.968895 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vggj" Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.968919 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vggj" event={"ID":"48fa23b5-90f7-40a4-84b9-7dd6295d23ca","Type":"ContainerDied","Data":"6068d6ad7c21e81e8d26e37dfe7b650db11873d48affe06e603052d57c14c8ff"} Nov 24 09:59:12 crc kubenswrapper[4886]: I1124 09:59:12.968946 4886 scope.go:117] "RemoveContainer" containerID="df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20" Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.002261 4886 scope.go:117] "RemoveContainer" containerID="f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23" Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.007526 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vggj"] Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.020474 4886 scope.go:117] "RemoveContainer" containerID="87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5" Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.024693 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vggj"] Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.063100 4886 scope.go:117] "RemoveContainer" containerID="df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20" Nov 24 09:59:13 crc kubenswrapper[4886]: E1124 09:59:13.063562 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20\": container with ID starting with df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20 not found: ID does not exist" containerID="df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20" Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.063616 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20"} err="failed to get container status \"df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20\": rpc error: code = NotFound desc = could not find container \"df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20\": container with ID starting with df1a9a579f86b93336c9da28d977e2ec05e9077892dfef95cd1b43187dd76c20 not found: ID does not exist" Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.063645 4886 scope.go:117] "RemoveContainer" containerID="f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23" Nov 24 09:59:13 crc kubenswrapper[4886]: E1124 09:59:13.064299 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23\": container with ID starting with f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23 not found: ID does not exist" containerID="f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23" Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.067266 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23"} err="failed to get container status \"f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23\": rpc error: code = NotFound desc = could not find container \"f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23\": container with ID 
starting with f75f3ddbb6767f7d887356918ba051d018361c17f42a57f82a51b5a1ccd36c23 not found: ID does not exist" Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.067322 4886 scope.go:117] "RemoveContainer" containerID="87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5" Nov 24 09:59:13 crc kubenswrapper[4886]: E1124 09:59:13.068235 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5\": container with ID starting with 87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5 not found: ID does not exist" containerID="87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5" Nov 24 09:59:13 crc kubenswrapper[4886]: I1124 09:59:13.068259 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5"} err="failed to get container status \"87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5\": rpc error: code = NotFound desc = could not find container \"87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5\": container with ID starting with 87a5ce9770e1e5828a9dd3bfd120e9af8ab87b2ca1c7ccedcdde3f10b97d8fe5 not found: ID does not exist" Nov 24 09:59:14 crc kubenswrapper[4886]: I1124 09:59:14.871751 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" path="/var/lib/kubelet/pods/48fa23b5-90f7-40a4-84b9-7dd6295d23ca/volumes" Nov 24 09:59:20 crc kubenswrapper[4886]: I1124 09:59:20.114958 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7db518ac-866a-47c8-a5fb-264625a1c1fd/memcached/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.052448 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/util/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.424862 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/util/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.475619 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/pull/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.491568 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/pull/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.646554 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/util/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.653280 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/extract/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.675249 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0542c7b7525972e13a7189603754e544003130cfecb9b535ff041f20e1wkb22_603fdc43-36f5-4e80-9037-36c972f7cf05/pull/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.808381 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-pvdd8_6c8c64e0-e4d5-45c1-a697-205deeb19c54/kube-rbac-proxy/0.log" Nov 24 09:59:36 crc 
kubenswrapper[4886]: I1124 09:59:36.909083 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-6pwgl_0ca0fbbb-1734-4a4a-b996-c96aa000131c/kube-rbac-proxy/0.log" Nov 24 09:59:36 crc kubenswrapper[4886]: I1124 09:59:36.921225 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-pvdd8_6c8c64e0-e4d5-45c1-a697-205deeb19c54/manager/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.029063 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-6pwgl_0ca0fbbb-1734-4a4a-b996-c96aa000131c/manager/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.103624 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-9lqmh_ad04acbe-59a4-490c-ae4e-eacfbd65257c/manager/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.118033 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-9lqmh_ad04acbe-59a4-490c-ae4e-eacfbd65257c/kube-rbac-proxy/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.285834 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-jb6p4_a991f440-958e-42d4-b062-7369966d84c3/kube-rbac-proxy/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.474296 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-jb6p4_a991f440-958e-42d4-b062-7369966d84c3/manager/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.496360 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-glmkz_f52431d9-53d4-415b-9e99-3e92fe7be4ca/kube-rbac-proxy/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.521392 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-glmkz_f52431d9-53d4-415b-9e99-3e92fe7be4ca/manager/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.689859 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-z7c6j_def4f2b0-daf8-48c1-95ab-98c2c6f8c72d/manager/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.721480 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-z7c6j_def4f2b0-daf8-48c1-95ab-98c2c6f8c72d/kube-rbac-proxy/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.802911 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6df98c44d8-rsqm2_0f03538e-297e-410d-bf6e-0f947cba868c/kube-rbac-proxy/0.log" Nov 24 09:59:37 crc kubenswrapper[4886]: I1124 09:59:37.941307 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-tjkbx_6fc8a4d5-fad4-4eca-95c0-329b968d5c9d/kube-rbac-proxy/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.024841 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6df98c44d8-rsqm2_0f03538e-297e-410d-bf6e-0f947cba868c/manager/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.058085 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-tjkbx_6fc8a4d5-fad4-4eca-95c0-329b968d5c9d/manager/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.188785 4886 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-zks44_607c4e63-3cb6-43f8-86b0-7af4b07e81e4/kube-rbac-proxy/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.276631 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-zks44_607c4e63-3cb6-43f8-86b0-7af4b07e81e4/manager/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.392814 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-5zcvh_73e41e35-4218-492b-93d6-d068c687ee6e/kube-rbac-proxy/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.393263 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-5zcvh_73e41e35-4218-492b-93d6-d068c687ee6e/manager/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.466845 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-47vf5_671e2772-1d7f-4c97-91f6-83f0782b4f6b/kube-rbac-proxy/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.614222 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-47vf5_671e2772-1d7f-4c97-91f6-83f0782b4f6b/manager/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.669500 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-kczhh_9a2dc275-73a5-4caf-89fe-120ce9401655/kube-rbac-proxy/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.714061 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-kczhh_9a2dc275-73a5-4caf-89fe-120ce9401655/manager/0.log" Nov 24 09:59:38 crc 
kubenswrapper[4886]: I1124 09:59:38.807732 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-bpzxz_f269ac9a-b191-4262-93bf-6cbd27c0d445/kube-rbac-proxy/0.log" Nov 24 09:59:38 crc kubenswrapper[4886]: I1124 09:59:38.978259 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-bpzxz_f269ac9a-b191-4262-93bf-6cbd27c0d445/manager/0.log" Nov 24 09:59:39 crc kubenswrapper[4886]: I1124 09:59:39.042340 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-qnv8p_8aadf5e6-b19e-4b19-b812-50c5bd4721a4/manager/0.log" Nov 24 09:59:39 crc kubenswrapper[4886]: I1124 09:59:39.042817 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-qnv8p_8aadf5e6-b19e-4b19-b812-50c5bd4721a4/kube-rbac-proxy/0.log" Nov 24 09:59:39 crc kubenswrapper[4886]: I1124 09:59:39.178083 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw_6f4398e5-a5b8-4853-ac68-76385d1a749d/kube-rbac-proxy/0.log" Nov 24 09:59:39 crc kubenswrapper[4886]: I1124 09:59:39.242518 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-n5gmw_6f4398e5-a5b8-4853-ac68-76385d1a749d/manager/0.log" Nov 24 09:59:39 crc kubenswrapper[4886]: I1124 09:59:39.348832 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd7fdf8c-ztg92_33c0c863-6350-4195-acb5-0dcc801d867b/kube-rbac-proxy/0.log" Nov 24 09:59:39 crc kubenswrapper[4886]: I1124 09:59:39.542327 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5968c54bfb-nfhfk_48f1853b-9770-4f82-af2b-fc2be2f426b6/kube-rbac-proxy/0.log" Nov 24 09:59:39 crc kubenswrapper[4886]: I1124 09:59:39.772774 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wfxd2_f134bfae-349d-4078-b49c-7aba86c32093/registry-server/0.log" Nov 24 09:59:39 crc kubenswrapper[4886]: I1124 09:59:39.842063 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5968c54bfb-nfhfk_48f1853b-9770-4f82-af2b-fc2be2f426b6/operator/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.045200 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-z6p4s_26b9db43-5cbd-4513-8685-976bc2bccad8/kube-rbac-proxy/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.172142 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-nwx4f_789de7d5-5a8b-4005-b37d-83057da5b4e7/kube-rbac-proxy/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.195033 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-z6p4s_26b9db43-5cbd-4513-8685-976bc2bccad8/manager/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.377115 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-nwx4f_789de7d5-5a8b-4005-b37d-83057da5b4e7/manager/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.453606 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-sgnjz_50b161b3-4911-4ab1-b348-b1b52713c856/operator/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.562920 4886 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd7fdf8c-ztg92_33c0c863-6350-4195-acb5-0dcc801d867b/manager/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.661813 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-qmlpw_213c4726-cd5c-4f79-ac2a-bc3ca07f0019/kube-rbac-proxy/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.673495 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-qmlpw_213c4726-cd5c-4f79-ac2a-bc3ca07f0019/manager/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.732868 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-62fz7_ac24d05a-4485-4fad-a03c-2fb381960d7b/kube-rbac-proxy/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.810198 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-62fz7_ac24d05a-4485-4fad-a03c-2fb381960d7b/manager/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.909635 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-kgnpt_e69be7ce-2069-42ab-a8c9-7b4c29243ff0/kube-rbac-proxy/0.log" Nov 24 09:59:40 crc kubenswrapper[4886]: I1124 09:59:40.911306 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-kgnpt_e69be7ce-2069-42ab-a8c9-7b4c29243ff0/manager/0.log" Nov 24 09:59:41 crc kubenswrapper[4886]: I1124 09:59:41.097047 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-7zbrr_dc151242-3f76-4414-9a2b-a5e28adf12af/kube-rbac-proxy/0.log" Nov 24 09:59:41 crc kubenswrapper[4886]: 
I1124 09:59:41.121314 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-7zbrr_dc151242-3f76-4414-9a2b-a5e28adf12af/manager/0.log" Nov 24 09:59:58 crc kubenswrapper[4886]: I1124 09:59:58.315433 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5fr58_afdfb747-0bc0-40a4-89e6-dc6970617398/control-plane-machine-set-operator/0.log" Nov 24 09:59:58 crc kubenswrapper[4886]: I1124 09:59:58.511201 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fqpl9_e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f/machine-api-operator/0.log" Nov 24 09:59:58 crc kubenswrapper[4886]: I1124 09:59:58.524422 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fqpl9_e4cfb4a2-69e1-4c38-b07f-cb1e628cbc2f/kube-rbac-proxy/0.log" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.155284 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq"] Nov 24 10:00:00 crc kubenswrapper[4886]: E1124 10:00:00.156203 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerName="extract-utilities" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156223 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerName="extract-utilities" Nov 24 10:00:00 crc kubenswrapper[4886]: E1124 10:00:00.156247 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerName="registry-server" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156256 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerName="registry-server" Nov 24 10:00:00 crc 
kubenswrapper[4886]: E1124 10:00:00.156282 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerName="extract-content" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156290 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerName="extract-content" Nov 24 10:00:00 crc kubenswrapper[4886]: E1124 10:00:00.156305 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerName="extract-content" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156312 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerName="extract-content" Nov 24 10:00:00 crc kubenswrapper[4886]: E1124 10:00:00.156326 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerName="extract-content" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156333 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerName="extract-content" Nov 24 10:00:00 crc kubenswrapper[4886]: E1124 10:00:00.156348 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerName="extract-utilities" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156355 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerName="extract-utilities" Nov 24 10:00:00 crc kubenswrapper[4886]: E1124 10:00:00.156367 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerName="extract-utilities" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156375 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerName="extract-utilities" Nov 24 10:00:00 crc 
kubenswrapper[4886]: E1124 10:00:00.156384 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerName="registry-server" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156391 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerName="registry-server" Nov 24 10:00:00 crc kubenswrapper[4886]: E1124 10:00:00.156404 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerName="registry-server" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156411 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerName="registry-server" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156628 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d53400e-5071-4c4a-a561-801e95f7b79b" containerName="registry-server" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156640 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd2fee0-8a88-4571-9c40-7585198593e1" containerName="registry-server" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.156657 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fa23b5-90f7-40a4-84b9-7dd6295d23ca" containerName="registry-server" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.157620 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.161926 4886 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.161963 4886 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.167481 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq"] Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.262536 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2a8378b-b9c6-4888-ae60-ce497700a7fc-config-volume\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.262600 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2a8378b-b9c6-4888-ae60-ce497700a7fc-secret-volume\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.262652 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69ws6\" (UniqueName: \"kubernetes.io/projected/e2a8378b-b9c6-4888-ae60-ce497700a7fc-kube-api-access-69ws6\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.364962 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2a8378b-b9c6-4888-ae60-ce497700a7fc-config-volume\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.365018 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2a8378b-b9c6-4888-ae60-ce497700a7fc-config-volume\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.365058 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2a8378b-b9c6-4888-ae60-ce497700a7fc-secret-volume\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.365091 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69ws6\" (UniqueName: \"kubernetes.io/projected/e2a8378b-b9c6-4888-ae60-ce497700a7fc-kube-api-access-69ws6\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.374503 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e2a8378b-b9c6-4888-ae60-ce497700a7fc-secret-volume\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.387362 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69ws6\" (UniqueName: \"kubernetes.io/projected/e2a8378b-b9c6-4888-ae60-ce497700a7fc-kube-api-access-69ws6\") pod \"collect-profiles-29399640-gd7wq\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.492811 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:00 crc kubenswrapper[4886]: I1124 10:00:00.945637 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq"] Nov 24 10:00:01 crc kubenswrapper[4886]: I1124 10:00:01.455431 4886 generic.go:334] "Generic (PLEG): container finished" podID="e2a8378b-b9c6-4888-ae60-ce497700a7fc" containerID="7839b19476b8b48382e8d13783a96cf86682a1f53e423da258645d842189b91a" exitCode=0 Nov 24 10:00:01 crc kubenswrapper[4886]: I1124 10:00:01.455787 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" event={"ID":"e2a8378b-b9c6-4888-ae60-ce497700a7fc","Type":"ContainerDied","Data":"7839b19476b8b48382e8d13783a96cf86682a1f53e423da258645d842189b91a"} Nov 24 10:00:01 crc kubenswrapper[4886]: I1124 10:00:01.455856 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" 
event={"ID":"e2a8378b-b9c6-4888-ae60-ce497700a7fc","Type":"ContainerStarted","Data":"a304085fef74dfa0748435e3cf04268afbf4d1c512badb70f526ff6b45baa57e"} Nov 24 10:00:02 crc kubenswrapper[4886]: I1124 10:00:02.885118 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.047191 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2a8378b-b9c6-4888-ae60-ce497700a7fc-secret-volume\") pod \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.047368 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69ws6\" (UniqueName: \"kubernetes.io/projected/e2a8378b-b9c6-4888-ae60-ce497700a7fc-kube-api-access-69ws6\") pod \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.047446 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2a8378b-b9c6-4888-ae60-ce497700a7fc-config-volume\") pod \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\" (UID: \"e2a8378b-b9c6-4888-ae60-ce497700a7fc\") " Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.048262 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a8378b-b9c6-4888-ae60-ce497700a7fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2a8378b-b9c6-4888-ae60-ce497700a7fc" (UID: "e2a8378b-b9c6-4888-ae60-ce497700a7fc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.051199 4886 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2a8378b-b9c6-4888-ae60-ce497700a7fc-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.054526 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a8378b-b9c6-4888-ae60-ce497700a7fc-kube-api-access-69ws6" (OuterVolumeSpecName: "kube-api-access-69ws6") pod "e2a8378b-b9c6-4888-ae60-ce497700a7fc" (UID: "e2a8378b-b9c6-4888-ae60-ce497700a7fc"). InnerVolumeSpecName "kube-api-access-69ws6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.055697 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a8378b-b9c6-4888-ae60-ce497700a7fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2a8378b-b9c6-4888-ae60-ce497700a7fc" (UID: "e2a8378b-b9c6-4888-ae60-ce497700a7fc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.153575 4886 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2a8378b-b9c6-4888-ae60-ce497700a7fc-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.153621 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69ws6\" (UniqueName: \"kubernetes.io/projected/e2a8378b-b9c6-4888-ae60-ce497700a7fc-kube-api-access-69ws6\") on node \"crc\" DevicePath \"\"" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.475202 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" event={"ID":"e2a8378b-b9c6-4888-ae60-ce497700a7fc","Type":"ContainerDied","Data":"a304085fef74dfa0748435e3cf04268afbf4d1c512badb70f526ff6b45baa57e"} Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.475719 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a304085fef74dfa0748435e3cf04268afbf4d1c512badb70f526ff6b45baa57e" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.475274 4886 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-gd7wq" Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.960092 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m"] Nov 24 10:00:03 crc kubenswrapper[4886]: I1124 10:00:03.969081 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-hxg8m"] Nov 24 10:00:04 crc kubenswrapper[4886]: I1124 10:00:04.864484 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74884a84-50f8-45a2-9b2c-29f84f510593" path="/var/lib/kubelet/pods/74884a84-50f8-45a2-9b2c-29f84f510593/volumes" Nov 24 10:00:11 crc kubenswrapper[4886]: I1124 10:00:11.440993 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ff82d_fc0d7b30-aa61-4f00-a908-d13689ed0b04/cert-manager-controller/0.log" Nov 24 10:00:11 crc kubenswrapper[4886]: I1124 10:00:11.588556 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-gsdqn_9475a865-8fb9-4c93-aeb0-09e9b8285a88/cert-manager-cainjector/0.log" Nov 24 10:00:11 crc kubenswrapper[4886]: I1124 10:00:11.612274 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-9cfx6_7b1b394b-0362-4ee6-a956-48d7598ef6a2/cert-manager-webhook/0.log" Nov 24 10:00:23 crc kubenswrapper[4886]: I1124 10:00:23.166525 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-nxtnd_166ba125-3d7b-4ab8-bbca-7f707fd9261b/nmstate-console-plugin/0.log" Nov 24 10:00:23 crc kubenswrapper[4886]: I1124 10:00:23.321451 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bxczf_50df2428-7c0e-4f4a-9c13-dd5cb4038f2e/nmstate-handler/0.log" Nov 24 10:00:23 crc kubenswrapper[4886]: I1124 
10:00:23.358826 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-646k5_028a41e3-6c82-4e95-a4e5-fc835e4d75af/kube-rbac-proxy/0.log" Nov 24 10:00:23 crc kubenswrapper[4886]: I1124 10:00:23.414718 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-646k5_028a41e3-6c82-4e95-a4e5-fc835e4d75af/nmstate-metrics/0.log" Nov 24 10:00:23 crc kubenswrapper[4886]: I1124 10:00:23.623352 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-dvjwb_33d55c5c-55cd-453e-8888-c064a7e0e36d/nmstate-webhook/0.log" Nov 24 10:00:23 crc kubenswrapper[4886]: I1124 10:00:23.631144 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-mc67m_2c6833a8-49fc-4959-b487-21009d6da024/nmstate-operator/0.log" Nov 24 10:00:31 crc kubenswrapper[4886]: I1124 10:00:31.784921 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:00:31 crc kubenswrapper[4886]: I1124 10:00:31.787311 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:00:38 crc kubenswrapper[4886]: I1124 10:00:38.792630 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-znxdl_8bfe8a52-0472-407d-a1c4-a828c81e5032/kube-rbac-proxy/0.log" Nov 24 10:00:38 crc kubenswrapper[4886]: I1124 10:00:38.904024 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-znxdl_8bfe8a52-0472-407d-a1c4-a828c81e5032/controller/0.log" Nov 24 10:00:38 crc kubenswrapper[4886]: I1124 10:00:38.984937 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-frr-files/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.185943 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-reloader/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.217730 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-frr-files/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.250397 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-reloader/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.258983 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-metrics/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.464831 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-metrics/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.483869 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-metrics/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.503993 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-reloader/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.505335 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-frr-files/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.683946 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-frr-files/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.686217 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-reloader/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.713040 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/controller/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.746243 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/cp-metrics/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.917972 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/kube-rbac-proxy/0.log" Nov 24 10:00:39 crc kubenswrapper[4886]: I1124 10:00:39.926950 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/frr-metrics/0.log" Nov 24 10:00:40 crc kubenswrapper[4886]: I1124 10:00:40.017309 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/kube-rbac-proxy-frr/0.log" Nov 24 10:00:40 crc kubenswrapper[4886]: I1124 10:00:40.203452 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/reloader/0.log" Nov 24 10:00:40 crc kubenswrapper[4886]: I1124 10:00:40.298524 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-npnj4_3d2f363e-5545-4437-90ff-060ba6628fa9/frr-k8s-webhook-server/0.log" Nov 24 10:00:40 crc kubenswrapper[4886]: I1124 10:00:40.519369 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-688456bb67-dhj9s_24f2f5da-80b6-49b8-abe7-43f1301c84db/manager/0.log" Nov 24 10:00:40 crc kubenswrapper[4886]: I1124 10:00:40.689391 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fd5b69667-tg7zm_58ed8691-0e33-4c91-aecb-d8bfcceab2de/webhook-server/0.log" Nov 24 10:00:40 crc kubenswrapper[4886]: I1124 10:00:40.884914 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gwqzh_795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00/kube-rbac-proxy/0.log" Nov 24 10:00:41 crc kubenswrapper[4886]: I1124 10:00:41.022887 4886 scope.go:117] "RemoveContainer" containerID="984c775b9a47f7a5588a2753d4614fdfaf7ff38c830934c1ccec29151730b8c5" Nov 24 10:00:41 crc kubenswrapper[4886]: I1124 10:00:41.532265 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gwqzh_795ee8d3-ac1e-4a6f-ba0a-8b75eda08e00/speaker/0.log" Nov 24 10:00:41 crc kubenswrapper[4886]: I1124 10:00:41.566502 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gr6pn_68ac7d9f-558c-415d-a499-9aca2c3c7d62/frr/0.log" Nov 24 10:00:55 crc kubenswrapper[4886]: I1124 10:00:55.277842 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/util/0.log" Nov 24 10:00:55 crc kubenswrapper[4886]: I1124 10:00:55.466206 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/pull/0.log" Nov 24 10:00:55 crc 
kubenswrapper[4886]: I1124 10:00:55.488021 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/pull/0.log" Nov 24 10:00:55 crc kubenswrapper[4886]: I1124 10:00:55.543609 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/util/0.log" Nov 24 10:00:55 crc kubenswrapper[4886]: I1124 10:00:55.643357 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/pull/0.log" Nov 24 10:00:55 crc kubenswrapper[4886]: I1124 10:00:55.667085 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/extract/0.log" Nov 24 10:00:55 crc kubenswrapper[4886]: I1124 10:00:55.689923 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e5rzdb_45839f6c-5966-4fa5-84da-187fc952f624/util/0.log" Nov 24 10:00:55 crc kubenswrapper[4886]: I1124 10:00:55.867838 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-utilities/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.028801 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-content/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.052777 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-content/0.log" 
Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.063582 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-utilities/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.234830 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-content/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.255886 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/extract-utilities/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.482959 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-utilities/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.744395 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-content/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.761259 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-content/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.761313 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-utilities/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.930746 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5zslr_acc3d90c-d18b-48b6-94b2-8ef5250fd6c3/registry-server/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.954442 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-utilities/0.log" Nov 24 10:00:56 crc kubenswrapper[4886]: I1124 10:00:56.996978 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/extract-content/0.log" Nov 24 10:00:57 crc kubenswrapper[4886]: I1124 10:00:57.212061 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/util/0.log" Nov 24 10:00:57 crc kubenswrapper[4886]: I1124 10:00:57.746281 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2wf7k_1fb9d8ba-cdd5-4186-8905-8e06876efe9c/registry-server/0.log" Nov 24 10:00:57 crc kubenswrapper[4886]: I1124 10:00:57.834606 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/pull/0.log" Nov 24 10:00:57 crc kubenswrapper[4886]: I1124 10:00:57.880894 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/util/0.log" Nov 24 10:00:57 crc kubenswrapper[4886]: I1124 10:00:57.912312 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/pull/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.139328 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/extract/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.198771 4886 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/util/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.243187 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c68lz9q_e28dd825-a491-4ac3-a3b1-0e19192a40b9/pull/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.354287 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-psbrg_5c32598f-bb74-4615-b8f9-77f36f97f80a/marketplace-operator/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.376523 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-utilities/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.595333 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-content/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.596929 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-utilities/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.610922 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-content/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.805638 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-content/0.log" Nov 24 10:00:58 crc kubenswrapper[4886]: I1124 10:00:58.845562 4886 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/extract-utilities/0.log" Nov 24 10:00:59 crc kubenswrapper[4886]: I1124 10:00:59.000664 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hpdtc_b7685eb7-7670-424e-834e-cbe8c0a62dc9/registry-server/0.log" Nov 24 10:00:59 crc kubenswrapper[4886]: I1124 10:00:59.491424 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-utilities/0.log" Nov 24 10:00:59 crc kubenswrapper[4886]: I1124 10:00:59.618223 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-utilities/0.log" Nov 24 10:00:59 crc kubenswrapper[4886]: I1124 10:00:59.661578 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-content/0.log" Nov 24 10:00:59 crc kubenswrapper[4886]: I1124 10:00:59.666907 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-content/0.log" Nov 24 10:00:59 crc kubenswrapper[4886]: I1124 10:00:59.809835 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-utilities/0.log" Nov 24 10:00:59 crc kubenswrapper[4886]: I1124 10:00:59.838518 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/extract-content/0.log" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.093514 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvvtb_7472f270-eb34-4f9e-b332-d7f53ff1e014/registry-server/0.log" Nov 24 
10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.164133 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399641-dmngw"] Nov 24 10:01:00 crc kubenswrapper[4886]: E1124 10:01:00.164705 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a8378b-b9c6-4888-ae60-ce497700a7fc" containerName="collect-profiles" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.164731 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a8378b-b9c6-4888-ae60-ce497700a7fc" containerName="collect-profiles" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.164996 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a8378b-b9c6-4888-ae60-ce497700a7fc" containerName="collect-profiles" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.165878 4886 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.175136 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399641-dmngw"] Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.253973 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-config-data\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.254062 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldf8\" (UniqueName: \"kubernetes.io/projected/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-kube-api-access-tldf8\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.254226 4886 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-fernet-keys\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.254359 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-combined-ca-bundle\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.357025 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-combined-ca-bundle\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.357180 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-config-data\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.357237 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tldf8\" (UniqueName: \"kubernetes.io/projected/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-kube-api-access-tldf8\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.357291 4886 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-fernet-keys\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.364385 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-fernet-keys\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.364586 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-config-data\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.365438 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-combined-ca-bundle\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.376399 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tldf8\" (UniqueName: \"kubernetes.io/projected/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-kube-api-access-tldf8\") pod \"keystone-cron-29399641-dmngw\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.491849 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:00 crc kubenswrapper[4886]: I1124 10:01:00.939943 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399641-dmngw"] Nov 24 10:01:01 crc kubenswrapper[4886]: I1124 10:01:01.066675 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399641-dmngw" event={"ID":"829b8bbf-7b4b-47d2-a26a-93a13eb3436b","Type":"ContainerStarted","Data":"05ec21b9acfbdc854d9d7e492d82f1bf872b4668e14e0dc4ef434ea35e077824"} Nov 24 10:01:01 crc kubenswrapper[4886]: I1124 10:01:01.784195 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:01:01 crc kubenswrapper[4886]: I1124 10:01:01.784267 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:01:02 crc kubenswrapper[4886]: I1124 10:01:02.078127 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399641-dmngw" event={"ID":"829b8bbf-7b4b-47d2-a26a-93a13eb3436b","Type":"ContainerStarted","Data":"bea1ba59669b89df0661b0bf456619d069628eaae06685c3df8bd64fde26e47f"} Nov 24 10:01:02 crc kubenswrapper[4886]: I1124 10:01:02.100702 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399641-dmngw" podStartSLOduration=2.100673151 podStartE2EDuration="2.100673151s" podCreationTimestamp="2025-11-24 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-24 10:01:02.091146289 +0000 UTC m=+4317.977884444" watchObservedRunningTime="2025-11-24 10:01:02.100673151 +0000 UTC m=+4317.987411286" Nov 24 10:01:04 crc kubenswrapper[4886]: I1124 10:01:04.101965 4886 generic.go:334] "Generic (PLEG): container finished" podID="829b8bbf-7b4b-47d2-a26a-93a13eb3436b" containerID="bea1ba59669b89df0661b0bf456619d069628eaae06685c3df8bd64fde26e47f" exitCode=0 Nov 24 10:01:04 crc kubenswrapper[4886]: I1124 10:01:04.102059 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399641-dmngw" event={"ID":"829b8bbf-7b4b-47d2-a26a-93a13eb3436b","Type":"ContainerDied","Data":"bea1ba59669b89df0661b0bf456619d069628eaae06685c3df8bd64fde26e47f"} Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.476892 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.546998 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-config-data\") pod \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.547072 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tldf8\" (UniqueName: \"kubernetes.io/projected/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-kube-api-access-tldf8\") pod \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.547099 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-fernet-keys\") pod \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") 
" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.553434 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "829b8bbf-7b4b-47d2-a26a-93a13eb3436b" (UID: "829b8bbf-7b4b-47d2-a26a-93a13eb3436b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.558012 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-kube-api-access-tldf8" (OuterVolumeSpecName: "kube-api-access-tldf8") pod "829b8bbf-7b4b-47d2-a26a-93a13eb3436b" (UID: "829b8bbf-7b4b-47d2-a26a-93a13eb3436b"). InnerVolumeSpecName "kube-api-access-tldf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.602972 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-config-data" (OuterVolumeSpecName: "config-data") pod "829b8bbf-7b4b-47d2-a26a-93a13eb3436b" (UID: "829b8bbf-7b4b-47d2-a26a-93a13eb3436b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.649008 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-combined-ca-bundle\") pod \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\" (UID: \"829b8bbf-7b4b-47d2-a26a-93a13eb3436b\") " Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.649715 4886 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.649738 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tldf8\" (UniqueName: \"kubernetes.io/projected/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-kube-api-access-tldf8\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.649752 4886 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.679802 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "829b8bbf-7b4b-47d2-a26a-93a13eb3436b" (UID: "829b8bbf-7b4b-47d2-a26a-93a13eb3436b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:01:05 crc kubenswrapper[4886]: I1124 10:01:05.751957 4886 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829b8bbf-7b4b-47d2-a26a-93a13eb3436b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:06 crc kubenswrapper[4886]: I1124 10:01:06.123669 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399641-dmngw" event={"ID":"829b8bbf-7b4b-47d2-a26a-93a13eb3436b","Type":"ContainerDied","Data":"05ec21b9acfbdc854d9d7e492d82f1bf872b4668e14e0dc4ef434ea35e077824"} Nov 24 10:01:06 crc kubenswrapper[4886]: I1124 10:01:06.123722 4886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05ec21b9acfbdc854d9d7e492d82f1bf872b4668e14e0dc4ef434ea35e077824" Nov 24 10:01:06 crc kubenswrapper[4886]: I1124 10:01:06.123734 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399641-dmngw" Nov 24 10:01:21 crc kubenswrapper[4886]: E1124 10:01:21.916499 4886 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.194:43528->38.129.56.194:46609: read tcp 38.129.56.194:43528->38.129.56.194:46609: read: connection reset by peer Nov 24 10:01:31 crc kubenswrapper[4886]: I1124 10:01:31.784438 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:01:31 crc kubenswrapper[4886]: I1124 10:01:31.785116 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:01:31 crc kubenswrapper[4886]: I1124 10:01:31.785200 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 10:01:31 crc kubenswrapper[4886]: I1124 10:01:31.786283 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36b26e4cf83aad3ea3d4859c5d848427409291cf3105e6e7735d206d24bf9ffb"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 10:01:31 crc kubenswrapper[4886]: I1124 10:01:31.786357 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" containerID="cri-o://36b26e4cf83aad3ea3d4859c5d848427409291cf3105e6e7735d206d24bf9ffb" gracePeriod=600 Nov 24 10:01:32 crc kubenswrapper[4886]: I1124 10:01:32.376993 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="36b26e4cf83aad3ea3d4859c5d848427409291cf3105e6e7735d206d24bf9ffb" exitCode=0 Nov 24 10:01:32 crc kubenswrapper[4886]: I1124 10:01:32.377191 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"36b26e4cf83aad3ea3d4859c5d848427409291cf3105e6e7735d206d24bf9ffb"} Nov 24 10:01:32 crc kubenswrapper[4886]: I1124 10:01:32.378388 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" 
event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerStarted","Data":"3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86"} Nov 24 10:01:32 crc kubenswrapper[4886]: I1124 10:01:32.378519 4886 scope.go:117] "RemoveContainer" containerID="1e45cb7a19817212ac61663702e5e39e029304fff651e6302fbb94fac4e5b2b7" Nov 24 10:02:48 crc kubenswrapper[4886]: I1124 10:02:48.182720 4886 generic.go:334] "Generic (PLEG): container finished" podID="795701b4-9f3a-4065-bc4f-54daab63c092" containerID="d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956" exitCode=0 Nov 24 10:02:48 crc kubenswrapper[4886]: I1124 10:02:48.182873 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-w5rqf/must-gather-55bft" event={"ID":"795701b4-9f3a-4065-bc4f-54daab63c092","Type":"ContainerDied","Data":"d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956"} Nov 24 10:02:48 crc kubenswrapper[4886]: I1124 10:02:48.185111 4886 scope.go:117] "RemoveContainer" containerID="d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956" Nov 24 10:02:48 crc kubenswrapper[4886]: I1124 10:02:48.287560 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w5rqf_must-gather-55bft_795701b4-9f3a-4065-bc4f-54daab63c092/gather/0.log" Nov 24 10:02:58 crc kubenswrapper[4886]: I1124 10:02:58.167675 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-w5rqf/must-gather-55bft"] Nov 24 10:02:58 crc kubenswrapper[4886]: I1124 10:02:58.168680 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-w5rqf/must-gather-55bft" podUID="795701b4-9f3a-4065-bc4f-54daab63c092" containerName="copy" containerID="cri-o://727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381" gracePeriod=2 Nov 24 10:02:58 crc kubenswrapper[4886]: I1124 10:02:58.184007 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-w5rqf/must-gather-55bft"] Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.139303 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w5rqf_must-gather-55bft_795701b4-9f3a-4065-bc4f-54daab63c092/copy/0.log" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.141010 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.259467 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/795701b4-9f3a-4065-bc4f-54daab63c092-must-gather-output\") pod \"795701b4-9f3a-4065-bc4f-54daab63c092\" (UID: \"795701b4-9f3a-4065-bc4f-54daab63c092\") " Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.259539 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsr8t\" (UniqueName: \"kubernetes.io/projected/795701b4-9f3a-4065-bc4f-54daab63c092-kube-api-access-lsr8t\") pod \"795701b4-9f3a-4065-bc4f-54daab63c092\" (UID: \"795701b4-9f3a-4065-bc4f-54daab63c092\") " Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.266814 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795701b4-9f3a-4065-bc4f-54daab63c092-kube-api-access-lsr8t" (OuterVolumeSpecName: "kube-api-access-lsr8t") pod "795701b4-9f3a-4065-bc4f-54daab63c092" (UID: "795701b4-9f3a-4065-bc4f-54daab63c092"). InnerVolumeSpecName "kube-api-access-lsr8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.300136 4886 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-w5rqf_must-gather-55bft_795701b4-9f3a-4065-bc4f-54daab63c092/copy/0.log" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.300778 4886 generic.go:334] "Generic (PLEG): container finished" podID="795701b4-9f3a-4065-bc4f-54daab63c092" containerID="727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381" exitCode=143 Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.300849 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-w5rqf/must-gather-55bft" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.300865 4886 scope.go:117] "RemoveContainer" containerID="727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.331191 4886 scope.go:117] "RemoveContainer" containerID="d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.364444 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsr8t\" (UniqueName: \"kubernetes.io/projected/795701b4-9f3a-4065-bc4f-54daab63c092-kube-api-access-lsr8t\") on node \"crc\" DevicePath \"\"" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.410108 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795701b4-9f3a-4065-bc4f-54daab63c092-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "795701b4-9f3a-4065-bc4f-54daab63c092" (UID: "795701b4-9f3a-4065-bc4f-54daab63c092"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.418510 4886 scope.go:117] "RemoveContainer" containerID="727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381" Nov 24 10:02:59 crc kubenswrapper[4886]: E1124 10:02:59.419291 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381\": container with ID starting with 727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381 not found: ID does not exist" containerID="727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.419326 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381"} err="failed to get container status \"727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381\": rpc error: code = NotFound desc = could not find container \"727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381\": container with ID starting with 727762faabd1db2f814319e6e068d7022ec9057298b8c6b43b6b81ebec26a381 not found: ID does not exist" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.419348 4886 scope.go:117] "RemoveContainer" containerID="d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956" Nov 24 10:02:59 crc kubenswrapper[4886]: E1124 10:02:59.419717 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956\": container with ID starting with d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956 not found: ID does not exist" containerID="d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.419786 
4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956"} err="failed to get container status \"d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956\": rpc error: code = NotFound desc = could not find container \"d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956\": container with ID starting with d706c070b3265ad6535e1e677553c0937835772ba86a6a3e9af4c4d63886f956 not found: ID does not exist" Nov 24 10:02:59 crc kubenswrapper[4886]: I1124 10:02:59.466790 4886 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/795701b4-9f3a-4065-bc4f-54daab63c092-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 10:03:00 crc kubenswrapper[4886]: I1124 10:03:00.865722 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795701b4-9f3a-4065-bc4f-54daab63c092" path="/var/lib/kubelet/pods/795701b4-9f3a-4065-bc4f-54daab63c092/volumes" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.150823 4886 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9qb4b"] Nov 24 10:03:38 crc kubenswrapper[4886]: E1124 10:03:38.152413 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829b8bbf-7b4b-47d2-a26a-93a13eb3436b" containerName="keystone-cron" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.152448 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="829b8bbf-7b4b-47d2-a26a-93a13eb3436b" containerName="keystone-cron" Nov 24 10:03:38 crc kubenswrapper[4886]: E1124 10:03:38.152466 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795701b4-9f3a-4065-bc4f-54daab63c092" containerName="copy" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.152472 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="795701b4-9f3a-4065-bc4f-54daab63c092" containerName="copy" Nov 24 
10:03:38 crc kubenswrapper[4886]: E1124 10:03:38.152481 4886 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795701b4-9f3a-4065-bc4f-54daab63c092" containerName="gather" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.152491 4886 state_mem.go:107] "Deleted CPUSet assignment" podUID="795701b4-9f3a-4065-bc4f-54daab63c092" containerName="gather" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.152696 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="795701b4-9f3a-4065-bc4f-54daab63c092" containerName="copy" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.152712 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="795701b4-9f3a-4065-bc4f-54daab63c092" containerName="gather" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.152730 4886 memory_manager.go:354] "RemoveStaleState removing state" podUID="829b8bbf-7b4b-47d2-a26a-93a13eb3436b" containerName="keystone-cron" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.154434 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.172345 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9qb4b"] Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.248541 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snw5m\" (UniqueName: \"kubernetes.io/projected/6681af7c-7ddb-48c4-aee7-4119c810d34d-kube-api-access-snw5m\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.248931 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-utilities\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.249218 4886 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-catalog-content\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.351316 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-utilities\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.351749 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-catalog-content\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.351926 4886 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snw5m\" (UniqueName: \"kubernetes.io/projected/6681af7c-7ddb-48c4-aee7-4119c810d34d-kube-api-access-snw5m\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.351964 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-utilities\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.352280 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-catalog-content\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.384361 4886 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snw5m\" (UniqueName: \"kubernetes.io/projected/6681af7c-7ddb-48c4-aee7-4119c810d34d-kube-api-access-snw5m\") pod \"redhat-operators-9qb4b\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.475933 4886 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:38 crc kubenswrapper[4886]: I1124 10:03:38.950322 4886 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9qb4b"] Nov 24 10:03:39 crc kubenswrapper[4886]: I1124 10:03:39.743530 4886 generic.go:334] "Generic (PLEG): container finished" podID="6681af7c-7ddb-48c4-aee7-4119c810d34d" containerID="9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436" exitCode=0 Nov 24 10:03:39 crc kubenswrapper[4886]: I1124 10:03:39.743607 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qb4b" event={"ID":"6681af7c-7ddb-48c4-aee7-4119c810d34d","Type":"ContainerDied","Data":"9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436"} Nov 24 10:03:39 crc kubenswrapper[4886]: I1124 10:03:39.743884 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qb4b" event={"ID":"6681af7c-7ddb-48c4-aee7-4119c810d34d","Type":"ContainerStarted","Data":"cf63abe7f586add035ae2fd97e00b24d5b2a1c97f3e07400071b96a7f56c6912"} Nov 24 10:03:40 crc kubenswrapper[4886]: I1124 10:03:40.754689 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qb4b" event={"ID":"6681af7c-7ddb-48c4-aee7-4119c810d34d","Type":"ContainerStarted","Data":"2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de"} Nov 24 10:03:42 crc kubenswrapper[4886]: I1124 10:03:42.774258 4886 generic.go:334] "Generic (PLEG): container finished" podID="6681af7c-7ddb-48c4-aee7-4119c810d34d" containerID="2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de" exitCode=0 Nov 24 10:03:42 crc kubenswrapper[4886]: I1124 10:03:42.774361 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qb4b" 
event={"ID":"6681af7c-7ddb-48c4-aee7-4119c810d34d","Type":"ContainerDied","Data":"2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de"} Nov 24 10:03:43 crc kubenswrapper[4886]: I1124 10:03:43.786985 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qb4b" event={"ID":"6681af7c-7ddb-48c4-aee7-4119c810d34d","Type":"ContainerStarted","Data":"f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b"} Nov 24 10:03:43 crc kubenswrapper[4886]: I1124 10:03:43.805466 4886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9qb4b" podStartSLOduration=2.394177228 podStartE2EDuration="5.805444852s" podCreationTimestamp="2025-11-24 10:03:38 +0000 UTC" firstStartedPulling="2025-11-24 10:03:39.745792776 +0000 UTC m=+4475.632530911" lastFinishedPulling="2025-11-24 10:03:43.15706039 +0000 UTC m=+4479.043798535" observedRunningTime="2025-11-24 10:03:43.801646093 +0000 UTC m=+4479.688384278" watchObservedRunningTime="2025-11-24 10:03:43.805444852 +0000 UTC m=+4479.692182997" Nov 24 10:03:48 crc kubenswrapper[4886]: I1124 10:03:48.477431 4886 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:48 crc kubenswrapper[4886]: I1124 10:03:48.478237 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:49 crc kubenswrapper[4886]: I1124 10:03:49.529541 4886 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9qb4b" podUID="6681af7c-7ddb-48c4-aee7-4119c810d34d" containerName="registry-server" probeResult="failure" output=< Nov 24 10:03:49 crc kubenswrapper[4886]: timeout: failed to connect service ":50051" within 1s Nov 24 10:03:49 crc kubenswrapper[4886]: > Nov 24 10:03:58 crc kubenswrapper[4886]: I1124 10:03:58.521041 4886 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:58 crc kubenswrapper[4886]: I1124 10:03:58.575760 4886 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:03:58 crc kubenswrapper[4886]: I1124 10:03:58.781567 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9qb4b"] Nov 24 10:03:59 crc kubenswrapper[4886]: I1124 10:03:59.962708 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9qb4b" podUID="6681af7c-7ddb-48c4-aee7-4119c810d34d" containerName="registry-server" containerID="cri-o://f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b" gracePeriod=2 Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.460547 4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.575929 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-catalog-content\") pod \"6681af7c-7ddb-48c4-aee7-4119c810d34d\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.576076 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snw5m\" (UniqueName: \"kubernetes.io/projected/6681af7c-7ddb-48c4-aee7-4119c810d34d-kube-api-access-snw5m\") pod \"6681af7c-7ddb-48c4-aee7-4119c810d34d\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.576227 4886 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-utilities\") pod 
\"6681af7c-7ddb-48c4-aee7-4119c810d34d\" (UID: \"6681af7c-7ddb-48c4-aee7-4119c810d34d\") " Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.577656 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-utilities" (OuterVolumeSpecName: "utilities") pod "6681af7c-7ddb-48c4-aee7-4119c810d34d" (UID: "6681af7c-7ddb-48c4-aee7-4119c810d34d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.585996 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6681af7c-7ddb-48c4-aee7-4119c810d34d-kube-api-access-snw5m" (OuterVolumeSpecName: "kube-api-access-snw5m") pod "6681af7c-7ddb-48c4-aee7-4119c810d34d" (UID: "6681af7c-7ddb-48c4-aee7-4119c810d34d"). InnerVolumeSpecName "kube-api-access-snw5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.669423 4886 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6681af7c-7ddb-48c4-aee7-4119c810d34d" (UID: "6681af7c-7ddb-48c4-aee7-4119c810d34d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.679197 4886 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.679227 4886 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snw5m\" (UniqueName: \"kubernetes.io/projected/6681af7c-7ddb-48c4-aee7-4119c810d34d-kube-api-access-snw5m\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.679238 4886 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6681af7c-7ddb-48c4-aee7-4119c810d34d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.977433 4886 generic.go:334] "Generic (PLEG): container finished" podID="6681af7c-7ddb-48c4-aee7-4119c810d34d" containerID="f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b" exitCode=0 Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.977498 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qb4b" event={"ID":"6681af7c-7ddb-48c4-aee7-4119c810d34d","Type":"ContainerDied","Data":"f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b"} Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.977562 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qb4b" event={"ID":"6681af7c-7ddb-48c4-aee7-4119c810d34d","Type":"ContainerDied","Data":"cf63abe7f586add035ae2fd97e00b24d5b2a1c97f3e07400071b96a7f56c6912"} Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.977587 4886 scope.go:117] "RemoveContainer" containerID="f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b" Nov 24 10:04:00 crc kubenswrapper[4886]: I1124 10:04:00.977511 
4886 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9qb4b" Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.004034 4886 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9qb4b"] Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.004274 4886 scope.go:117] "RemoveContainer" containerID="2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de" Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.013820 4886 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9qb4b"] Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.027566 4886 scope.go:117] "RemoveContainer" containerID="9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436" Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.074270 4886 scope.go:117] "RemoveContainer" containerID="f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b" Nov 24 10:04:01 crc kubenswrapper[4886]: E1124 10:04:01.074602 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b\": container with ID starting with f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b not found: ID does not exist" containerID="f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b" Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.074634 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b"} err="failed to get container status \"f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b\": rpc error: code = NotFound desc = could not find container \"f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b\": container with ID starting with 
f43f22c606043af8bcfbf8efe4166aa826dc11e0226d3b8eb06c1e44d028257b not found: ID does not exist" Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.074656 4886 scope.go:117] "RemoveContainer" containerID="2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de" Nov 24 10:04:01 crc kubenswrapper[4886]: E1124 10:04:01.074935 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de\": container with ID starting with 2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de not found: ID does not exist" containerID="2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de" Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.074959 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de"} err="failed to get container status \"2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de\": rpc error: code = NotFound desc = could not find container \"2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de\": container with ID starting with 2ea993522b61cdb78d136c4c96da5fbd005d1b00fd353d19d45ef97695d349de not found: ID does not exist" Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.074973 4886 scope.go:117] "RemoveContainer" containerID="9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436" Nov 24 10:04:01 crc kubenswrapper[4886]: E1124 10:04:01.075448 4886 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436\": container with ID starting with 9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436 not found: ID does not exist" containerID="9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436" Nov 24 10:04:01 crc 
kubenswrapper[4886]: I1124 10:04:01.075517 4886 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436"} err="failed to get container status \"9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436\": rpc error: code = NotFound desc = could not find container \"9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436\": container with ID starting with 9aabd02b1e87941f41d921899ddbbdf8a0368576562e147d7846188e25206436 not found: ID does not exist" Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.784944 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:04:01 crc kubenswrapper[4886]: I1124 10:04:01.785467 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:04:02 crc kubenswrapper[4886]: I1124 10:04:02.861908 4886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6681af7c-7ddb-48c4-aee7-4119c810d34d" path="/var/lib/kubelet/pods/6681af7c-7ddb-48c4-aee7-4119c810d34d/volumes" Nov 24 10:04:31 crc kubenswrapper[4886]: I1124 10:04:31.784634 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:04:31 crc kubenswrapper[4886]: I1124 10:04:31.785257 4886 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:05:01 crc kubenswrapper[4886]: I1124 10:05:01.784769 4886 patch_prober.go:28] interesting pod/machine-config-daemon-zc46q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:05:01 crc kubenswrapper[4886]: I1124 10:05:01.785351 4886 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:05:01 crc kubenswrapper[4886]: I1124 10:05:01.785414 4886 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" Nov 24 10:05:01 crc kubenswrapper[4886]: I1124 10:05:01.785985 4886 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86"} pod="openshift-machine-config-operator/machine-config-daemon-zc46q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 10:05:01 crc kubenswrapper[4886]: I1124 10:05:01.786037 4886 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" 
containerName="machine-config-daemon" containerID="cri-o://3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" gracePeriod=600 Nov 24 10:05:01 crc kubenswrapper[4886]: E1124 10:05:01.907634 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 10:05:02 crc kubenswrapper[4886]: I1124 10:05:02.630733 4886 generic.go:334] "Generic (PLEG): container finished" podID="23cb993e-0360-4449-b604-8ddd825a6502" containerID="3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" exitCode=0 Nov 24 10:05:02 crc kubenswrapper[4886]: I1124 10:05:02.630796 4886 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" event={"ID":"23cb993e-0360-4449-b604-8ddd825a6502","Type":"ContainerDied","Data":"3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86"} Nov 24 10:05:02 crc kubenswrapper[4886]: I1124 10:05:02.630846 4886 scope.go:117] "RemoveContainer" containerID="36b26e4cf83aad3ea3d4859c5d848427409291cf3105e6e7735d206d24bf9ffb" Nov 24 10:05:02 crc kubenswrapper[4886]: I1124 10:05:02.632009 4886 scope.go:117] "RemoveContainer" containerID="3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" Nov 24 10:05:02 crc kubenswrapper[4886]: E1124 10:05:02.632400 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 10:05:16 crc kubenswrapper[4886]: I1124 10:05:16.850435 4886 scope.go:117] "RemoveContainer" containerID="3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" Nov 24 10:05:16 crc kubenswrapper[4886]: E1124 10:05:16.851981 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 10:05:29 crc kubenswrapper[4886]: I1124 10:05:29.849184 4886 scope.go:117] "RemoveContainer" containerID="3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" Nov 24 10:05:29 crc kubenswrapper[4886]: E1124 10:05:29.849998 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 10:05:44 crc kubenswrapper[4886]: I1124 10:05:44.858282 4886 scope.go:117] "RemoveContainer" containerID="3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" Nov 24 10:05:44 crc kubenswrapper[4886]: E1124 10:05:44.859081 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 10:05:56 crc kubenswrapper[4886]: I1124 10:05:56.849446 4886 scope.go:117] "RemoveContainer" containerID="3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" Nov 24 10:05:56 crc kubenswrapper[4886]: E1124 10:05:56.850296 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 10:06:03 crc kubenswrapper[4886]: I1124 10:06:03.025386 4886 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-558564f98c-jl2ms" podUID="c1f11d5d-8b31-47b7-9ceb-197d5ca23475" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 10:06:09 crc kubenswrapper[4886]: I1124 10:06:09.851420 4886 scope.go:117] "RemoveContainer" containerID="3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" Nov 24 10:06:09 crc kubenswrapper[4886]: E1124 10:06:09.853625 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502" Nov 24 10:06:24 crc kubenswrapper[4886]: I1124 10:06:24.856430 4886 scope.go:117] "RemoveContainer" 
containerID="3bc95c1f2f670013db4a5ec1af66ae89aa568d30fba817b20b23d1cb6678ea86" Nov 24 10:06:24 crc kubenswrapper[4886]: E1124 10:06:24.857626 4886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zc46q_openshift-machine-config-operator(23cb993e-0360-4449-b604-8ddd825a6502)\"" pod="openshift-machine-config-operator/machine-config-daemon-zc46q" podUID="23cb993e-0360-4449-b604-8ddd825a6502"